Born-Oppenheimer approximation
1. I am confused by a couple of terms usually used in the context of non-radiative transitions. I believe that I understand the concept of diabatic and adiabatic states as described in the literature. The basic finding is that the coupling terms in the Hamiltonian matrix (in the basis of the diabatic states) result in an avoided crossing.
I want to transfer this finding to the case of the Born-Oppenheimer approximation, which is said to break down in the region around a level crossing. And this is actually the point where I come across my first problem. When neglecting the first and the off-diagonal elements (closely related to the non-adiabaticity operator), do I get diabatic states (case A) or adiabatic states (case B)?
If case A is valid, the situation as depicted below would seem logical.
Here we face a level crossing, which is regarded as the breakdown of the Born-Oppenheimer approximation. As soon as the off-diagonal elements are taken into account again, the avoided crossing would then be obtained.
But from my literature search I get the impression that the Born-Oppenheimer approximation leads to adiabatic states. But what is the breakdown of the Born-Oppenheimer approximation then? And what are the non-adiabatic transitions resulting from the non-adiabaticity operator in the last figure?
I hope anybody can resolve my problems with this stuff!
3. DrDu
To follow your line of argumentation, it would be helpful if you were to write down the Hamiltonian you are talking about. Which one is the first element?
4. Thanks for your interest!
Just to make sure that there aren't any misunderstandings, I will also repeat the main definitions. As usual, the total wavefunction [itex]\Psi({\bf r},{\bf R})[/itex] is expanded as a series of electronic wavefunctions [itex]\chi_k({\bf r};{\bf R})[/itex]:
[itex]\Psi({\bf r},{\bf R}) = \sum\limits_k \chi_k({\bf r};{\bf R}) \eta_k({\bf R})[/itex].
The electronic Hamiltonian [itex]{\cal H}_\mathrm{e}[/itex] is expressed as
[itex]{\cal H}_\mathrm{e}=T_\mathrm{e}+V_\mathrm{ee}+V_\mathrm{en}+V_\mathrm{nn}[/itex],
and the electronic wavefunctions satisfy the electronic Schrödinger equation
[itex]{\cal H}_\mathrm{e} \chi_k({\bf r};{\bf R}) = E_k({\bf R}) \chi_k({\bf r};{\bf R})[/itex].
The full Hamiltonian is defined as
[itex]{\cal H}=T_\mathrm{n}+E_k({\bf R})[/itex],
whose matrix elements should be calculated in the basis of [itex]\eta_k({\bf R})[/itex]. Then the Hamiltonian reads
[itex] {\cal H} = \left(\begin{array}{cccc}
T_\mathrm{n}+E_1({\bf R}) & 0 & 0 & \cdots\\
0 & T_\mathrm{n}+E_2({\bf R}) & 0 & \cdots\\
0 & 0 & T_\mathrm{n}+E_3({\bf R}) & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{array}\right) + \underbrace{\left(\begin{array}{cccc}
\tilde{H}_{11} & \tilde{H}_{12} & \tilde{H}_{13} & \cdots\\
\tilde{H}_{21} & \tilde{H}_{22} & \tilde{H}_{23} & \cdots\\
\tilde{H}_{31} & \tilde{H}_{32} & \tilde{H}_{33} & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{array}\right)}_\text{Non-adiabaticity operator} [/itex].
The non-adiabaticity operator has the following matrix elements:
[itex]\tilde{H}_{ij} = -\frac{\hbar^2}{2M}\Big( \underbrace{2\left\langle\chi_i({\bf r};{\bf R})\left|\nabla_\mathrm{n}\right|\chi_j({\bf r};{\bf R})\right\rangle \nabla_\mathrm{n}}_\text{first order} + \underbrace{\left\langle\chi_i({\bf r};{\bf R})\left|\nabla_\mathrm{n}^2\right|\chi_j({\bf r};{\bf R})\right\rangle}_\text{second order} \Big)[/itex]
If [itex]i\neq j[/itex], the [itex]\tilde{H}_{ij}[/itex] appear only as off-diagonal elements and are neglected in the Born-Oppenheimer approximation. So my question is whether [itex]E_k({\bf R})[/itex] is already the adiabatic potential energy surface with an avoided crossing, OR does the avoided crossing occur only when the off-diagonals are accounted for (which seems to be analogous to the concept of the adiabatic theorem)?
Last edited: Feb 4, 2013
5. DrDu
The ##E_k(R)## are already the Born-Oppenheimer PES which show avoided crossings. There is a minor distinction between the Born-Oppenheimer PES and the adiabatic ones: The latter include the diagonal part ##\tilde{H}_{ii}(R)##.
The diabatic states are obtained by looking for a unitary transformation which diagonalizes the non-adiabaticity operator for a subset of states (e.g. states 1 and 2).
Conceptually easier are the crude-adiabatic states, for which the whole non-adiabatic coupling matrix vanishes. These are obtained by using electronic states ##\chi_k(r;R_0)## referring to one fixed nuclear position ##R_0##.
There is a famous theorem by von Neumann and Wigner that even adiabatic (or BO) PES will cross when the space of nuclear displacements is more than one-dimensional. In two dimensions this happens at a point, and the PES can be shown to have the form of a cone, whence one speaks of a conical intersection. Non-adiabatic couplings become very large or even singular there. This is what is meant by the breakdown of the BO approximation.
The best known examples occur in Jahn-Teller systems although conical intersections have turned out to be important in almost any photochemical process.
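A minimal numerical sketch of the avoided crossing described above (the two linear diabatic potentials and the coupling constant are made-up illustration values):

```python
import numpy as np

# Two model diabatic surfaces that cross at R = 0, with a constant
# electronic coupling c between them (all in arbitrary units).
def H_diabatic(R, c=0.05):
    return np.array([[0.5 * R, c],
                     [c, -0.5 * R]])

R_grid = np.linspace(-1.0, 1.0, 201)
adiabatic = np.array([np.linalg.eigvalsh(H_diabatic(R)) for R in R_grid])

# The diabatic surfaces cross, but the adiabatic eigenvalues never do:
gap = adiabatic[:, 1] - adiabatic[:, 0]
print("minimum adiabatic gap:", gap.min())   # ~0.1 = 2c, an avoided crossing
```

Diagonalizing the diabatic matrix at each R yields the adiabatic surfaces; the off-diagonal coupling is exactly what opens the gap at the crossing point.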
6. Thanks for your help! You have given me the needed impetus so that I can go deeper into this stuff now.
I am basically a computer programmer, but physics has always fascinated and often baffled me.
I have tried to understand probability density in quantum mechanics for many, many years. What I understood is that the probability amplitude is the square root of the probability of finding an electron around a nucleus. But the square root of a probability does not mean anything in the physical sense. Can anyone please explain the physical significance of probability amplitude in quantum mechanics?
I read the Wikipedia article on probability amplitude many times over. What do those dumbbell-shaped images represent?
6 Answers
Part of your problem is
"Probability amplitude is the square root of the probability [...]"
The amplitude is a complex number whose squared modulus is the probability. That is, $\psi^* \psi = P$, where the asterisk superscript means the complex conjugate.1 It may seem a little pedantic to make this distinction because so far the "complex phase" of the amplitudes has no effect on the observables at all: we could always rotate any given amplitude onto the positive real line and then "the square root" would be fine.
But we can't guarantee to be able to rotate more than one amplitude that way at the same time.
Moreover, there are two ways to combine amplitudes to find probabilities for observation of combined events.
• When the final states are distinguishable you add probabilities: $P_{dis} = P_1 + P_2 = \psi_1^* \psi_1 + \psi_2^* \psi_2$.
• When the final states are indistinguishable,2 you add amplitudes: $\Psi_{1,2} = \psi_1 + \psi_2$, and $P_{ind} = \Psi_{1,2}^*\Psi_{1,2} = \psi_1^*\psi_1 + \psi_1^*\psi_2 + \psi_2^*\psi_1 + \psi_2^*\psi_2$. The terms that mix the amplitudes labeled 1 and 2 are the "interference terms". The interference terms are why we can't ignore the complex nature of the amplitudes, and they cause many kinds of quantum weirdness.
1 Here I'm using a notation reminiscent of a Schrödinger-like formulation, but that interpretation is not required. Just accept $\psi$ as a complex number representing the amplitude for some observation.
2 This is not precise, the states need to be "coherent", but you don't want to hear about that today.
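A small numerical sketch of the two combination rules (the amplitude values are hypothetical, chosen to make the interference maximal):

```python
import numpy as np

# Two equal-magnitude amplitudes with a relative phase of pi (made-up values)
psi1 = (1/np.sqrt(2)) * np.exp(1j * 0.0)
psi2 = (1/np.sqrt(2)) * np.exp(1j * np.pi)

P_distinguishable = abs(psi1)**2 + abs(psi2)**2   # add probabilities
P_indistinguishable = abs(psi1 + psi2)**2         # add amplitudes, then square

print(P_distinguishable)     # 1.0
print(P_indistinguishable)   # ~0.0: the interference terms cancel everything
```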
Before trying to understand quantum mechanics proper, I think it's helpful to try to understand the general idea of its statistics and probability.
There are basically two kinds of mathematical systems that can yield a nontrivial formalism for probability. One is the kind we're familiar with from everyday life: each outcome has a probability, and those probabilities directly add up to 100%. A coin has two sides, each with 50% probability. $50\% + 50\% = 100\%$, so there you go.
But there's another system of probability, very different from what you and I are used to. It's a system where each event has an associated vector (or complex number), and the sum of the squared magnitudes of those vectors (complex numbers) is 1.
Quantum mechanics works according to this latter system, and for this reason, the complex numbers associated with events are what we often deal with. The wavefunction of a particle is just the distribution of these complex numbers over space. We have chosen to call these numbers the "probability amplitudes" merely as a matter of convenience.
The system of probability that QM follows is very different from what everyday experience would lead us to believe, and this has many mathematical consequences. It makes interference effects possible, for example, and such effects are explainable only in terms of amplitudes. For this reason, amplitudes are physically significant--they are significant because the mathematical model for probability on the quantum scale is not what you and I are accustomed to.
Edit: regarding "just extra stuff under the hood." Here's a more concrete way of talking about the difference between classical and quantum probability.
Let $A$ and $B$ be mutually exclusive events. In classical probability, they would have associated probabilities $p_A$ and $p_B$, and the total probability of them occurring is obtained through addition, $p_{A \cup B} = p_A + p_B$.
In quantum probability, their amplitudes add instead. This is a key difference. There is a total amplitude $\psi_{A \cup B} = \psi_A + \psi_B$, and the squared magnitude of this amplitude--that is, the probability--is as follows:
$$p_{A \cup B} = |\psi_A + \psi_B|^2 = p_A + p_B + (\psi_A^* \psi_B + \psi_A \psi_B^*)$$
There is an extra term, yielding physically different behavior. This quantifies the effects of interference, and for the right choices of $\psi_A$ and $\psi_B$, you could end up with two events that have nonzero individual probabilities, but the probability of the union is zero! Or higher than the individual probabilities.
I'm not too happy with the formulation of "mathematical systems that can yield a nontrivial formalism for probability." Firstly, because it sounds like you imply that there are only these two "systems", and secondly, because the quantum framework is still one where "each outcome has a probability, and those probabilities directly add up to 100%." It's just extra dynamics under the hood. – NikolajK Mar 21 '13 at 16:13
There are only these two systems. It is mathematically proven that you couldn't have, say, an amplitude that must be raised to the 4th power. There is only classical probability as we know it and the quantum kind. It's not just extra stuff under the hood, either. See my edit. – Muphrid Mar 21 '13 at 16:28
Whatever is mathematically proven must be w.r.t. some postulates, and these are not stated. Also, there are the observables whose probabilities sum to 100% (namely the probability to be in any of a total set of eigenstates), and in this sense it's just probability theory with complex dynamics under the hood. I still don't think this is an inappropriate formulation. – NikolajK Mar 21 '13 at 18:23
@Muphrid could you provide a reference for the result that "there are only those two systems"? – glS Feb 3 at 21:54
In quantum mechanics a particle is described by its wave-function $\psi$ (in spatial representation it would for example be $\psi(x,t)$, but I omit the arguments in the following). Observables, like the position $x$, are represented by operators $\hat x$. The mean value of the position of a particle is calculated as $$\int \mathrm{d}x \tilde \psi \hat x \psi.$$
Since $\hat x$ applied to $\psi(x,t)$ just gives the position $x$ times $\psi(x,t)$ we can write the integral as $$\int \mathrm{d}x x \tilde \psi \psi.$$
$\tilde \psi$ is the complex conjugate of $\psi$ and therefore $\tilde \psi \psi=|\psi|^2$.
And finally, since a mean value is usually computed as an integral over the variable times a probability distribution $\rho$ as $$\langle X \rangle_\rho=\int \mathrm{d}X X \rho(X)$$ $|\psi|^2$ can be interpreted as a probability density of finding the particle at some point. E.g. The probability of it being between $a$ and $b$ is $$\int_a^b\mathrm{d}x|\psi|^2$$
So the wave function (which is the solution to the Schrödinger equation that describes the system in question) is a probability amplitude in the sense of the first sentence of the article you linked.
Lastly, the dumbbell shows the area in space where $|\psi|^2$ is larger than some very small number, so basically the regions where it is not unlikely to find the electron.
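A minimal sketch of these formulas on a grid (the wavefunction here is a made-up normalized Gaussian; hbar = m = 1 assumed):

```python
import numpy as np

# Represent a normalized Gaussian wavefunction psi(x) on a grid and
# recover probabilities from |psi|^2.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.pi**(-0.25) * np.exp(-x**2 / 2)

density = np.abs(psi)**2                 # |psi|^2 is the probability density
print(np.sum(density) * dx)              # ~1.0: total probability
print(np.sum(x * density) * dx)          # mean position <x> ~ 0.0

mask = (x >= 0) & (x <= 1)               # probability of a=0 <= x <= b=1
print(np.sum(density[mask]) * dx)        # ~0.42 (= erf(1)/2)
```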
Have a look at this simplified statement in describing the behavior of a particle in a potential problem:
In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density.
This complex number comes from a solution of a quantum mechanical equation with the boundary conditions of the problem, usually a Schroedinger equation, whose solutions are the "wavefunctions" ψ(x), where x represents the coordinates generically for this argument.
The values taken by a normalized wave function ψ at each point x are probability amplitudes, since $|\psi(x)|^2$ gives the probability density at position x.
To get from the complex numbers to a probability distribution, the probability of finding the particle, we have to take the complex square of the wavefunction ψ*ψ .
So the "probability amplitude" is an alternate definition/identification of "wavefunction", coming after the fact, when it was found experimentally that ψ*ψ gives a probability density distribution for the particle in question.
First one computes ψ and then one can evaluate the probability density ψ*ψ, not the other way around. The significance of ψ is that it is the result of a computation.
I agree it is confusing for non physicists who know probabilities from statistics.
I agree with the other answers provided. However, you may find the probability amplitudes more intuitive in the context of the Feynman path integral approach.
Suppose a particle is created at the location $x_1$ at time $0$ and that you want to know the probability for observing it later at some position $x_2$ at time $t$.
Every path $P$ that starts at $x_1$ at time zero and ends at $x_2$ at time $t$ is associated with a (complex) probability amplitude $A_P$. Within the path integral approach, the total amplitude for the process initially described is given by the sum of all these amplitudes:
$A_{\textrm{total}} = \sum_P A_P$
I.e. the sum over all possible paths the particle could take between $x_1$ and $x_2$. These paths interfere coherently, and the probability for observing the particle at $x_2$ at time $t$ is given by the square of the total amplitude:
$\textrm{probability to observe the particle at $x_2$ at time $t$} = |A_{\textrm{total}}|^2 = |\sum_P A_P|^2$
I should note that the Feynman path integral formalism (described above) is actually a special case of a more general approach wherein the amplitudes are associated with processes rather than paths.
Also, a good reference for this is volume 3 of The Feynman Lectures.
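A toy sketch of summing path amplitudes, reduced to a two-slit geometry (all coordinates and the wavenumber are made-up illustration values; each route contributes a phase e^{ikL} rather than a full action integral):

```python
import numpy as np

# Source -> slit -> screen: sum the amplitudes of the two routes, then square.
k = 20.0
source = np.array([0.0, 0.0])
slits = [np.array([1.0, 0.2]), np.array([1.0, -0.2])]

def intensity(y_screen):
    screen = np.array([2.0, y_screen])
    A_total = sum(np.exp(1j * k * (np.linalg.norm(s - source)
                                   + np.linalg.norm(screen - s)))
                  for s in slits)
    return abs(A_total)**2

for y in (0.0, 0.5, 1.0):
    print(y, intensity(y))   # intensity oscillates with y: interference fringes
```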
In quantum mechanics, the amplitude $\psi$, and not the probability $|\psi|^2$, is the quantity which admits the superposition principle. Notice that the dynamics of the physical system (the Schrödinger equation) is formulated in terms of, and is linear in, this object. Observe that working with superpositions of $\psi$ also permits complex phases $e^{i\theta}$ to play a role. In the same spirit, the overlap of two systems is computed by investigating the overlap of the amplitudes.
All you say is factually correct, but since the question asked for an explanation in layman's terms I think there needs to be more explanation. – user9886 Mar 21 '13 at 16:21
@user9886: The integrals involving position operators are layman's terms? – NikolajK Mar 21 '13 at 18:11
What is the benefit in using complex phases rather than just sine and cosine? – wrongusername Feb 27 '14 at 2:57
Schrödinger equation
In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics.
In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe atomic and subatomic systems, electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who discovered it in 1926.
Schrödinger's equation can be mathematically transformed into Heisenberg's matrix mechanics, and into the Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is less severe in Heisenberg's formulation and completely absent in the path integral.
The Schrödinger equation
There are several different Schrödinger equations.
General quantum system
For a general quantum system:
i\hbar {d\Psi \over dt} = \hat H \Psi
• \Psi is the wavefunction, which is the probability amplitude for different configurations.
• \scriptstyle \hbar is Planck's constant over 2\pi, and it can be set to a value of 1 when using natural units.
• \scriptstyle \hat H is the Hamiltonian operator.
Single particle in three dimensions
For a single particle in three dimensions:
i\hbar\frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V(x,y,z)\psi
• \psi is the wavefunction, which is the amplitude for the particle to have a given position at any given time.
• m is the mass of the particle.
• V(x,y,z) is the potential energy the particle has at each position.
Historical background and development
Einstein interpreted Planck's quanta as photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a mysterious wave-particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in relativity, it followed that the momentum of a photon is proportional to its wavenumber.
De Broglie hypothesized that this is true for all particles, for electrons as well as photons: that the energy and momentum of an electron are the frequency and wavenumber of a wave. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves only for certain discrete frequencies, discrete energy levels which reproduced the old quantum condition.
Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system--- the trajectories of light rays become sharp tracks which obey a principle of least action. Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves. This is what Schrödinger did, and a modern version of his reasoning is reproduced in the next section. The equation he found is (in natural units):
i \frac{\partial}{\partial t}\psi=-\frac{1}{2m}\nabla^2\psi + V(x)\psi
Using this equation, Schrödinger computed the spectral lines for hydrogen by treating a hydrogen atom's single negatively charged electron as a wave, \psi\;, moving in a potential well, V, created by the positively charged proton. This computation reproduced the energy levels of the Bohr model.
But this was not enough, since Sommerfeld had already seemingly correctly reproduced relativistic corrections. Schrödinger used the relativistic energy momentum relation to find what is now known as the Klein-Gordon equation in a Coulomb potential:
(E + {e^2\over r} )^2 \psi = - \nabla^2\psi + m^2 \psi
He found the standing-waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover.
While there, Schrödinger decided that the earlier nonrelativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper in 1926. The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as the visualizable antidote to what he considered to be the overly formal matrix mechanics.
The Schrödinger equation tells you the behaviour of \psi , but does not say what \psi is. Schrödinger tried unsuccessfully, in his fourth paper, to interpret it as a charge density. In 1926 Max Born, just a few days after Schrödinger's fourth and final paper was published, successfully interpreted \psi as a probability amplitude. Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities; like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory, Schrödinger was never reconciled to the Copenhagen interpretation.
Short Heuristic Derivation
(1) The total energy E of a particle is
E= T + V = \frac{p^2}{2m}+V
This is the classical expression for a particle with mass m where the total energy E is the sum of the kinetic energy, \frac{p^2}{2m}, and the potential energy V. The momentum of the particle is p, or mass times velocity. The potential energy is assumed to vary with position, and possibly time as well.
Note that the energy E and momentum p appear in the following two relations:
(2) Einstein's light quanta hypothesis of 1905, which asserts that the energy E of a photon is proportional to the frequency f of the corresponding electromagnetic wave:
E = h f = {h \over 2\pi} (2\pi f) = \hbar \omega \;
where the frequency f of the quanta of radiation (photons) are related by Planck's constant h,
and \omega = 2\pi f\; is the angular frequency of the wave.
(3) The de Broglie hypothesis of 1924, which states that any particle can be associated with a wave, represented mathematically by a wavefunction Ψ, and that the momentum p of the particle is related to the wavelength λ of the associated wave by:
p = { h \over \lambda } = { h \over 2\pi } {2\pi \over \lambda} = \hbar k\;
where \lambda\, is the wavelength and k = 2\pi / \lambda\; is the wavenumber of the wave.
Expressing p and k as vectors, we have
\mathbf{p} =\hbar \mathbf{k}\;
Expressing the wave function as a complex plane wave
Schrödinger's great insight, late in 1925, was to express the phase of a plane wave as a complex phase factor:
\Psi(\mathbf{x},t) = Ae^{i(\mathbf{k}\cdot\mathbf{x}- \omega t)} = Ae^{i \mathbf{k} \cdot \mathbf{x}}e^{-i\omega t} = \psi(\mathbf{x}) \phi(t)
\psi(\mathbf{x}) = Ae^{i \mathbf{k} \cdot \mathbf{x}}
\phi(t) = e^{-i\omega t} \,
and to realize that since
\frac{\partial}{\partial t} \Psi = -i\omega \Psi
E \Psi = \hbar \omega \Psi = i\hbar\frac{\partial}{\partial t} \Psi
and similarly since:
\frac{\partial}{\partial x} \Psi = i k_x \Psi
p_x \Psi = \hbar k_x \Psi = -i\hbar\frac{\partial}{\partial x} \Psi
and hence:
\mathbf{p} \Psi = \hbar \mathbf{k} \Psi = -i\hbar\nabla \Psi
so that, again for a plane wave, he obtained:
p^2 \Psi = (p_x^2 + p_y^2 + p_z^2) \Psi = -\hbar^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) \Psi = -\hbar^2\nabla^2 \Psi
And by inserting these expressions for the energy and momentum into the classical formula we started with, we get Schrödinger's famed equation for a single particle in the 3-dimensional case in the presence of a potential V:
i\hbar\frac{\partial}{\partial t} \Psi = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi
Longer Discussion
The particle is described by a wave, and in natural units, the frequency is the energy E of the particle, while the momentum p is the wavenumber k. These are not two separate assumptions, because of special relativity.
E= \omega \;\;\;\; p = k \,
The total energy is the same function of momentum and position as in classical mechanics:
E = T(p) + V(x) = {p^2\over 2m} + V(x)
where the first term T(p) is the kinetic energy and the second term V(x) is the potential energy.
Schrödinger required that a Wave packet at position x with wavenumber k will move along the trajectory determined by Newton's laws in the limit that the wavelength is small.
Consider first the case without a potential, V=0.
E = {1\over 2m} (p_x^2+p_y^2 + p_z^2)
\omega = {1\over 2m} (k_x^2 + k_y^2 + k_z^2)
So that a plane wave with the right energy/frequency relationship obeys the free Schrodinger equation:
i {\partial \over \partial t} \psi = -{1 \over 2m} ( {\partial^2 \psi \over \partial x^2} + {\partial^2 \psi \over \partial y^2} + {\partial^2 \psi \over \partial z^2} )
and by adding together plane waves, you can make an arbitrary wave.
When there is no potential, a wavepacket should travel in a straight line at the classical velocity. The velocity v of a wavepacket is:
v = {\partial \omega \over \partial k } = {\partial \over \partial k} { k^2\over 2m} = { k\over m}
which is the momentum over the mass as it should be. This is one of Hamilton's equations from mechanics:
{dx \over dt} = {\partial H \over \partial p}
after identifying the energy and momentum of a wavepacket as the frequency and wavenumber.
To include a potential energy, consider that as a particle moves the energy is conserved, so that for a wavepacket with approximate wavenumber k at approximate position x the quantity
{ k^2\over 2m } + V(x)
must be constant. The frequency doesn't change as a wave moves, but the wavenumber does. So where there is a potential energy, it must add in the same way:
i{\partial \over \partial t}\psi=-{1\over 2m}\nabla^2\psi + V(x)\psi
This is the time-dependent Schrödinger equation. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting:
E\rightarrow i{\partial\over \partial t} \;\;\;\;\;\; p\rightarrow -i{\partial\over \partial x}
Schrödinger studied the standing wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:
\psi(x,t) = \psi(x) e^{- iEt }
substituting, the time-dependent equation becomes the standing wave equation:
{E}\psi(x) = - {1\over 2m} \nabla^2 \psi(x) + V(x) \psi(x)
Which is the original time-independent Schrodinger equation.
In a potential gradient, the k-vector of a short-wavelength wave must vary from point to point, to keep the total energy constant. Sheets perpendicular to the k-vector are the wavefronts, and they gradually change direction, because the wavelength is not everywhere the same. A wavepacket follows the shifting wavefronts with the classical velocity, with the acceleration equal to the force divided by the mass.
An easy modern way to verify that Newton's second law holds for wavepackets is to take the Fourier transform of the time-dependent Schrodinger equation. For an arbitrary polynomial potential this is called the Schrodinger equation in the momentum representation:
i{\partial \psi(p) \over \partial t} = {p^2\over 2m} \psi(p) + V(i{\partial\over \partial p}) \psi(p)
The group-velocity relation for the Fourier-transformed wave-packet gives the second of Hamilton's equations.
{dp \over dt} = -{\partial H \over \partial x}
There are several equations which go by Schrodinger's name:
Time Dependent Equation
i{d\over dt} \psi = \hat H \psi
Where \hat H is a linear operator acting on the wavefunction \psi. \hat H takes as input one \psi and produces another in a linear way, a function-space version of a matrix multiplying a vector. For the specific case of a single particle in one dimension moving under the influence of a potential V:
i{d\over dt} \psi = -{1\over 2m} {\partial^2 \over \partial x^2} \psi(x) + V(x) \psi(x)
and the operator H can be read off:
it is a combination of the operator which takes the second derivative, and the operator which pointwise multiplies \psi by V(x). When acting on \psi it reproduces the right hand side.
and for N particles, the difference is that the wavefunction is in 3N-dimensional configuration space, the space of all possible particle positions.
i{d\over dt} \psi(x_1,...,x_n) = (-{\nabla_1^2\over 2m_1} - {\nabla_2^2 \over 2m_2} ... - {\nabla_N^2\over 2m_N} ) \psi + V(x_1,..,x_N)\psi
Time Independent Equation
This is the equation for the standing waves, the eigenvalue equation for H. In abstract form, for a general quantum system, it is written:
E \psi = \hat H \psi
For a particle in one dimension, with the mass absorbed into rescaling either time or space:
E \psi = -{\partial^2 \psi \over \partial x^2} + V(x)\psi
But there is a further restriction--- the solution must not grow at infinity, so that it has a finite norm:
|| \psi ||^2 = \int_x \psi^*(x) \psi(x)
For example, when there is no potential, the equation reads:
- E \psi = {\partial^2 \psi \over \partial x^2}
which has oscillatory solutions for E>0 (the C's are arbitrary constants):
\psi_E(x) = C_1 e^{i\sqrt{E} x} + C_2 e^{-i\sqrt{E}x}
and exponential solutions for E<0
\psi_{-|E|}(x) = C_1 e^{\sqrt{|E|} x} + C_2 e^{-\sqrt{|E|} x}
For a constant potential V, the solution is oscillatory for E > V and exponential for E < V; the exponential tails that penetrate classically forbidden regions give rise to quantum tunneling. If the potential V grows at infinity, the motion is classically confined to a finite region, which means that in quantum mechanics every solution becomes an exponential far enough away. The condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.
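A sketch of how these discrete allowed energies emerge numerically: discretize E\psi = -\psi'' + V(x)\psi on a grid and diagonalize (the harmonic potential is a made-up example, in the same rescaled units as above):

```python
import numpy as np

# Finite-difference Hamiltonian H = -d^2/dx^2 + V(x) on a grid.
N, L = 1500, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = x**2                                   # a confining potential

H = (np.diag(2.0/dx**2 + V)
     + np.diag(-1.0/dx**2 * np.ones(N-1), 1)
     + np.diag(-1.0/dx**2 * np.ones(N-1), -1))

print(np.linalg.eigvalsh(H)[:4])           # ~ [1, 3, 5, 7]: a discrete set
```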
Energy Eigenstates
A solution \scriptstyle \psi_E(x) of the time independent equation is called an energy eigenstate with energy E:
H\psi_E = E \psi_E
To find the time dependence of the state, consider starting the time-dependent equation with an initial condition \psi_E(x). The time derivative at t=0 is everywhere proportional to the value:
i {d\over dt} \psi(x) = H \psi(x) = E \psi(x)
So that at first the whole function just gets rescaled, and it maintains the property that its time derivative is proportional to itself. So for all times,
\psi(x,t) = A(t) \psi_E(x)
i {dA \over dt } = E A
So that the solution of the time-dependent equation with this initial condition is:
\psi(x,t) = \psi_E(x) e^{-iEt}
or, with explicit factors of \hbar:
\psi(x,t) = \psi_E(x) e^{-i{E t\over\hbar}}
This is a restatement of the fact that solutions of the time-independent equation are the standing wave solutions of the time dependent equation. They only get multiplied by a phase as time goes by, and otherwise are unchanged.
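A quick finite-dimensional sketch of this statement (the random symmetric matrix stands in for a Hamiltonian; natural units assumed):

```python
import numpy as np

# An energy eigenstate evolves only by the phase exp(-iEt).
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
H = (M + M.T) / 2                          # a random symmetric "Hamiltonian"
E, U = np.linalg.eigh(H)

psi_E = U[:, 0]                            # eigenstate with energy E[0]
t = 2.7
psi_t = U @ (np.exp(-1j * E * t) * (U.T @ psi_E))   # exact e^{-iHt} psi

print(np.allclose(psi_t, np.exp(-1j * E[0] * t) * psi_E))   # True
```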
First Order in Time
The Schrodinger equation describes the time evolution of a quantum state, and must determine the future value from the present value. A classical field equation can be second order in time derivatives; in that case the classical state includes the time derivative of the field. But a quantum state is a full description of a system, so the Schrodinger equation is always first order in time.
The Schrödinger equation is linear in the wavefunction: if \psi(x, t) and \phi(x,t) are solutions to the time dependent equation, then so is a \psi + b \phi, where a and b are any complex numbers.
In quantum mechanics, the time evolution of a quantum state is always linear, for fundamental reasons. Although there are nonlinear versions of the Schrodinger equation, these are not equations which describe the evolution of a quantum state, but classical field equations like Maxwell's equations or the Klein-Gordon equation.
The Schrodinger equation itself can be thought of as the equation of motion for a classical field not for a wavefunction, and taking this point of view, it describes a coherent wave of nonrelativistic matter, a wave of a Bose condensate or a superfluid with a large indefinite number of particles and a definite phase and amplitude.
Real Eigenstates
The time-independent equation is also linear, but in this case linearity has a slightly different meaning. If two wavefunctions \psi_1 and \psi_2 are solutions to the time-independent equation with the same energy E, then any linear combination of the two is a solution with energy E. Two different solutions with the same energy are called degenerate.
\hat H (a\psi_1 + b \psi_2 ) = ( a \hat H \psi_1 + b \hat H \psi_2) = E (a \psi_1 + b\psi_2)
In an arbitrary potential, there is one obvious degeneracy: if a wavefunction \psi solves the time-independent equation, so does \psi^*. By taking linear combinations, the real and imaginary part of \psi are each solutions. So that restricting attention to real valued wavefunctions does not affect the time-independent eigenvalue problem.
In the time-dependent equation, complex conjugate waves move in opposite directions. Given a solution to the time dependent equation \psi(x,t), the replacement:
\psi(x,t) \rightarrow \psi^*(x,-t)
produces another solution, and is the extension of the complex conjugation symmetry to the time-dependent case. The symmetry of complex conjugation is called time-reversal.
Unitary Time Evolution
The Schrodinger equation is unitary, which means that the total norm of the wavefunction, the integral of the squared absolute value over all points:
\int_x \psi^*(x) \psi(x) = \langle \psi | \psi \rangle
has zero time derivative.
The derivative of \psi^* is according to the complex conjugate equations
-i \hbar {d\over dt} \psi^* = \hat H^\dagger \psi^*
where the operator H^\dagger is defined as the continuous analog of the Hermitian conjugate,
\langle H^{\dagger} \eta | \psi \rangle = \langle \eta | H \psi \rangle
For a discrete basis, this just means that the matrix elements of the linear operator H obey:
\hat H^\dagger_{ij} = \hat H^*_{ji}
The derivative of the inner product is:
{d\over dt} \langle \psi | \psi \rangle = i \langle H \psi | \psi \rangle - i \langle \psi| H \psi \rangle
and is proportional to the imaginary part of \langle \psi | H \psi \rangle. If this matrix element has no imaginary part, i.e. if H is self-adjoint, then the probability is conserved. This is true not just for the Schrodinger equation as written, but for the Schrodinger equation with nonlocal hopping:
i{d \over dt} \psi(x) = \int_y H(x,y) \psi(y)
so long as:
H(x,y) = H(y,x)^*
the particular choice:
H(x,y) = - {1\over 2m} \nabla_x^2 \delta(x-y) + V(x) \delta(x-y)
reproduces the local hopping in the ordinary Schrodinger equation. On a discrete lattice approximation to a continuous space, H(x,y) has a simple form:
H(x,y) = -{1\over 2m}
whenever x and y are nearest neighbors. On the diagonal
H(x,x) = +{n\over 2m} + V(x)
where n is the number of nearest neighbors.
Positive Energies
The solutions of the Schrodinger equation in a potential which is bounded below have a frequency which is bounded below. For any linear operator \scriptstyle \hat A, the eigenvector with the smallest eigenvalue minimizes the quantity:
\langle \psi |A|\psi \rangle
over all \psi which are normalized:
\|\psi\|^2 = \int_x |\psi(x)|^2 =\langle \psi | \psi \rangle = 1
by the variational principle.
The value of the energy for the Schrodinger Hamiltonian is then the minimum value of:
\langle \psi|H|\psi\rangle = \int_x \psi^*(x) (- \nabla^2 \psi + V(x)\psi) = \int_x |\nabla \psi|^2 + V(x) |\psi|^2 dx
after an integration by parts, and the right hand side is positive definite when V is positive.
Positive Definite Nondegenerate Ground State
Suppose for contradiction that \psi is a lowest energy state and has a sign change, then \eta(x)=|\psi(x)|, the absolute value of \psi obeys
V(x)|\eta|^2 = V(x)|\psi|^2\,
everywhere, and
|\nabla \eta|^2 = |\nabla \psi|^2
except for a set of measure zero. So \eta is also a minimum of the integral, and it has the same value as \psi. But by smoothing out the bends at the sign change, the gradient contribution to the integral is reduced while the potential energy is hardly altered, so the energy of \eta can be reduced, which is a contradiction.
The lack of sign changes also proves that the ground state is nondegenerate, since if there were two ground states with energy E which were not proportional to each other, a linear combination of the two would also be a ground state with a zero.
These properties allow the analytic continuation of the Schrodinger equation to be identified as a stochastic process, which can be represented by a path integral.
Local Conservation of Probability
The probability density of a particle is \psi^*(x)\psi(x). The probability flux is defined as:
\mathbf{j} = {\hbar \over m} \cdot {1 \over {2 \mathrm{i}}} \left( \psi ^{*} \nabla \psi - \psi \nabla \psi^{*} \right) = {\hbar \over m} \operatorname{Im} \left( \psi ^{*} \nabla \psi \right)
in units of (probability)/(area × time).
The probability flux satisfies the continuity equation:
{ \partial \over \partial t} P\left(x,t\right) + \nabla \cdot \mathbf{j} = 0
where P\left(x, t\right) is the probability density, measured in units of (probability)/(volume) = r^{-3}. This equation is the mathematical equivalent of the probability conservation law.
For a plane wave:
\psi (x,t) = \, A e^{ \mathrm{i} (k x - \omega t)}
j\left(x,t\right) = \left|A\right|^2 {\hbar k \over m}
So that not only is the probability of finding the particle the same everywhere, but the probability flux is as expected from an object moving at the classical velocity p/m.
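A short sketch checking the flux formula for a plane wave on a grid (hbar = m = 1; the amplitude and wavenumber are made-up values):

```python
import numpy as np

# j = Im(psi* dpsi/dx) for psi = A exp(ikx), compared against |A|^2 k.
x = np.linspace(0, 10, 1001)
A, k = 0.7, 3.0
psi = A * np.exp(1j * k * x)

j = np.imag(np.conj(psi) * np.gradient(psi, x))
print(j[500], A**2 * k)        # both ~ 1.47 = |A|^2 k, as expected
```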
The reason that the Schrodinger equation admits a probability flux is because all the hopping is local and forward in time.
Heisenberg Observables
There are many linear operators which act on the wavefunction, each one defines a Heisenberg matrix when the energy eigenstates are discrete. For a single particle, the operator which takes the derivative of the wavefunction in a certain direction:
\hat p = -i\hbar {\partial \over \partial x}
is called the momentum operator. Multiplying operators is just like multiplying matrices: the product of A and B acting on \psi is A acting on the output of B acting on \psi.
An eigenstate of p obeys the equation:
\hat p \psi = k \psi
for a number k; for a normalizable wavefunction this restricts k to be real, and the momentum eigenstate is a wave with wavenumber k.
\psi(x) = e^{i kx}
The position operator x multiplies each value of the wavefunction at the position x by x:
\hat x(\psi) = x\psi
So that in order to be an eigenstate of x, a wavefunction must be entirely concentrated at one point:
\hat x \delta(x-x_0) = x_0 \delta(x-x_0)
In terms of p, the Hamiltonian is:
\hat H = {\hat p^2\over 2m} + V(x)
It is easy to verify that p acting on x acting on psi:
\hat p (\hat x( \psi)) = -i \hbar {\partial \over \partial x}( x \psi) = -i\hbar x {\partial \over \partial x}\psi -i\hbar \psi
while x acting on p acting on psi reproduces only the first term:
\hat x(\hat p (\psi)) = -i \hbar x {\partial \over \partial x} \psi
so that the difference of the two is not zero:
( x p - p x ) \psi = i \hbar \psi
or in terms of operators:
\hat x \hat p - \hat p \hat x = i\hbar
Since the time derivative of a state is:
i{d\over dt} \psi = \hat H \psi
while the complex conjugate is
- i{d\over dt} \psi^* = \hat H \psi^*
The time derivative of a matrix element
The time derivative of a matrix element
{d\over dt} \langle \eta | A |\psi \rangle = i \langle \eta |(\hat H A - A \hat H)| \psi \rangle = i \langle \eta | [\hat H, A] | \psi \rangle
obeys the Heisenberg equation of motion. This establishes the equivalence of the Schrodinger and Heisenberg formalisms, ignoring the mathematical fine points of the limiting procedure for continuous space.
Correspondence principle
The Schrödinger equation satisfies the correspondence principle. In the limit of small wavelength wavepackets, it reproduces Newton's laws. This is easy to see from the equivalence to matrix mechanics.
All operators in Heisenberg's formalism obey the quantum analog of Hamilton's equations:
{dA \over dt} = {i \over \hbar} (HA - AH)
So that in particular, the equations of motion for the X and P operators are:
{dX \over dt} = {P\over m}
{dP \over dt} = - {\partial V \over \partial x}
In the Schrodinger picture, the interpretation of this equation is that it gives the time rate of change of the matrix element between two states when the states change with time. Taking the expectation value in any state shows that Newton's laws hold not only on average, but exactly, for the quantities:
\langle X\rangle = \int_x \psi^*(x)\, x \,\psi(x) = \langle \psi|X|\psi \rangle
\langle P\rangle = \int_x \psi^*(x) \left(-i\hbar {\partial \psi \over \partial x}(x)\right) = \langle \psi |P|\psi\rangle
The Schrödinger equation does not take into account relativistic effects; as a wave equation, it is invariant under a Galilean transformation, but not under a Lorentz transformation. In order to include relativity, the physical picture must be altered in a radical way.
A naive generalization of the Schrodinger equation uses the relativistic mass-energy relation (in natural units):
E^2 = P^2 + m^2
to produce the differential equation:
- {\partial^2 \over \partial t^2}\psi = - \nabla^2 \psi + m^2 \psi
which is relativistically invariant, but second order in time, and so cannot be an equation for the quantum state. This equation also has the property that there are solutions with both positive and negative frequency; a plane wave solution obeys:
\omega^2 - k^2 = m^2
which has two solutions, one with positive frequency the other with negative frequency. This is a disaster for quantum mechanics, because it means that the energy is unbounded below.
A more sophisticated attempt to solve this problem uses a first order wave equation, the Dirac equation, but again there are negative energy solutions. In order to solve this problem, it is essential to go to a multiparticle picture, and to consider the wave equations as equations of motion for a quantum field, not for a wavefunction.
The reason is that relativity is incompatible with a single particle picture. A relativistic particle cannot be localized to a small region without the particle number becoming indefinite. When a particle is localized in a box of length L, the momentum is uncertain by an amount roughly proportional to h/L by the uncertainty principle. This leads to an energy uncertainty of hc/L, when |p| is large enough so that the mass of the particle can be neglected. This uncertainty in energy is equal to the mass-energy of the particle when
L = {\hbar \over mc}
and this is called the Compton wavelength. Below this length, it is impossible to localize a particle and be sure that it stays a single particle, since the energy uncertainty is large enough to produce more particles from the vacuum by the same mechanism that localizes the original particle.
But there is another approach to relativistic quantum mechanics which does allow you to follow single particle paths, and it was discovered within the path-integral formulation. If the integration paths in the path integral include paths which move both backwards and forwards in time as a function of their own proper time, it is possible to construct a purely positive frequency wavefunction for a relativistic particle. This construction is appealing, because the equation of motion for the wavefunction is exactly the relativistic wave equation, but with a nonlocal constraint that separates the positive and negative frequency solutions. The positive frequency solutions travel forward in time, the negative frequency solutions travel backwards in time. In this way, they both analytically continue to a statistical field correlation function, which is also represented by a sum over paths. But in real space, they are the probability amplitudes for a particle to travel between two points, and can be used to generate the interaction of particles in a point-splitting and joining framework. The relativistic particle point of view is due to Richard Feynman.
Feynman's method also constructs the theory of quantized fields, but from a particle point of view. In this theory, the equations of motion for the field can be interpreted as the equations of motion for a wavefunction only with caution--- the wavefunction is only defined globally, and in some way related to the particle's proper time. The notion of a localized particle is also delicate--- a localized particle in the relativistic particle path integral corresponds to the state produced when a local field operator acts on the vacuum, and exactly which state is produced depends on the choice of field variables.
Some general techniques are:
• Perturbation theory
• The variational principle
• Quantum Monte Carlo methods
• Density functional theory
• The WKB approximation and semi-classical expansion
In some special cases, special methods can be used:
• List of quantum mechanical systems with analytical solutions
• Hartree-Fock method and post Hartree-Fock methods
• Discrete delta-potential method
Free Schrödinger equation
When the potential is zero, the Schrödinger equation is linear with constant coefficients:
i \frac{\partial \psi}{\partial t}=-{1\over 2m}\nabla^2\psi
where \scriptstyle \hbar has been set to 1. The solution \psi_t(x) for any initial condition \psi_0(x) can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave:
\psi_0(x) = A e^{i k x}
stays a plane wave. Only the coefficient changes. Substituting:
{dA \over dt} = -{i k^2 \over 2m} A
So that A is also oscillating in time:
A(t) = A e^{- i {k^2 \over 2m} t}
and the solution is:
\psi_t(x) = A e^{i k x - i \omega t}
where \omega=k^2/2m, a restatement of de Broglie's relations.
To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:
\psi_0(x) = \int_k \psi(k) e^{ikx}
The equation is linear, so each plane wave evolves independently:
\psi_t(x) = \int_k \psi(k)e^{-i\omega t} e^{ikx}
Which is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time--- Fourier transform the initial conditions, multiply by a phase, and transform back.
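A sketch of exactly this algorithm with the FFT (hbar = m = 1; the initial Gaussian and its momentum are made-up values):

```python
import numpy as np

# Evolve a free wavepacket: FFT, multiply by exp(-i k^2 t / 2), inverse FFT.
N, L, t = 1024, 100.0, 5.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2/2) * np.exp(1j*2.0*x)   # Gaussian moving with p = 2
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

print(np.sum(np.abs(psi_t)**2) * dx)        # ~1.772 = sqrt(pi): norm conserved
print(x[np.argmax(np.abs(psi_t))])          # peak has moved to ~ p*t = 10
```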
Gaussian Wavepacket
An easy and instructive example is the Gaussian wavepacket:
\psi_0(x) = e^{-x^2 / 2a}
where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is:
\langle \psi|\psi\rangle = \int_x \psi^* \psi = \sqrt{\pi a}
The Fourier transform is a Gaussian again in terms of the wavenumber k:
\psi_0(k) = (2\pi a)^{d/2} e^{- a k^2/2}
With the physics convention which puts the factors of 2\pi in Fourier transforms in the k-measure.
\psi_0(x) = \int_k \psi_0(k) e^{-ikx} = \int {d^dk \over (2\pi)^d} \psi_0(k) e^{-ikx}
\psi_t(k) = (2\pi a)^{d/2} e^{- a { k^2\over 2} - it {k^2\over 2m}} = (2\pi a)^{d/2} e^{-(a+it/m){k^2\over 2}}
The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor.
\psi_t(x) = \left({a \over a + i t/m}\right)^{d/2} e^{- {x^2\over 2(a + i t/m)} }
The branch of the square root is determined by continuity in time--- it is the value which is nearest to the positive square root of a. It is convenient to rescale time to absorb m, replacing t/m by t.
The integral of \psi over all space is invariant, because it is the inner product of \psi with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy state, with wavefunction \eta(x), the inner product:
\langle \eta | \psi \rangle = \int_x \eta(x) \psi_t(x)
only changes in time in a simple way: its phase rotates with a frequency determined by the energy of \eta. When \eta has zero energy, like the infinite wavelength wave, it doesn't change at all.
The sum of the absolute square of \psi is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension:
|\psi|^2 = \psi\psi^* = {a \over \sqrt{a^2+t^2} } e^{-{x^2 a \over a^2 + t^2}}
Which gives the norm:
\int |\psi|^2 = \sqrt{\pi a}
which has preserved its value, as it must.
The width of the Gaussian is the interesting quantity, and it can be read off from the form of |\psi|^2:
\sqrt{a^2 + t^2 \over a}
The width eventually grows linearly in time, as \scriptstyle t/\sqrt{a}. This is wave-packet spreading--- no matter how narrow the initial wavefunction, a Schrodinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty--- the wavepacket is confined to a narrow width \scriptstyle \sqrt{a} and so has a momentum which is uncertain by the reciprocal amount \scriptstyle 1/\sqrt{a}, a spread in velocity of \scriptstyle 1/m\sqrt{a}, and therefore in the future position by \scriptstyle t/m\sqrt{a}, where the factor of m has been restored by undoing the earlier rescaling of time.
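A sketch checking this width formula against direct evolution (hbar = 1, time already rescaled by m; the values of a and t are made up):

```python
import numpy as np

# Evolve psi_0 = exp(-x^2/(2a)) freely and measure the width of |psi|^2.
a, t = 1.0, 3.0
N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi = np.fft.ifft(np.fft.fft(np.exp(-x**2/(2*a))) * np.exp(-1j*k**2*t/2))
rho = np.abs(psi)**2
rho /= np.sum(rho) * dx                     # normalize the density

width = np.sqrt(2 * np.sum(x**2 * rho) * dx)
print(width, np.sqrt((a**2 + t**2)/a))      # both ~ 3.162
```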
Galilean Invariance
Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity -v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics:
p'= p + mv
x'= x + vt
So that the phase factor of a free Schrodinger plane wave:
p x - E t = (p' - mv)(x' - vt) - {(p'-mv)^2\over 2m} t = p' x' - E' t - m v x' + {mv^2\over 2}t
is only different in the boosted coordinates by a phase which depends on x and t, but not on p.
An arbitrary superposition of plane wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x,t dependent phase factor. So any solution to the free Schrodinger equation, \psi_t(x), can be boosted into other solutions:
\psi'_t(x) = \psi_t(x - vt) e^{ i mv x - i {mv^2\over 2}t}
Boosting a constant wavefunction produces a plane-wave. More generally, boosting a plane-wave:
\psi_t(x) = e^{ipx - i {p^2\over 2m} t}
produces a boosted wave:
\psi'_t(x) = e^{ i p(x - vt) - i{p^2\over 2m}t + imv x - i {mv^2\over 2}t} = e^{i(p+mv)x - i {(p+mv)^2\over 2m}t }
Boosting the spreading Gaussian wavepacket:
\psi_t(x) = {1\over \sqrt{a+it/m}} e^{ - {x^2\over 2(a + it/m)} }
produces the moving Gaussian:
\psi_t(x) = {1\over \sqrt{a + it/m}} e^{ - {(x - vt)^2 \over 2(a + it/m)} + i m v x - i {mv^2\over 2} t }
Which spreads in the same way.
Free Propagator
The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K. When a is the infinitesimal quantity \epsilon, the Gaussian initial condition, rescaled so that its integral is one:
\psi_0(x) = {1\over \sqrt{2\pi \epsilon} } e^{-{x^2\over 2\epsilon}}
becomes a delta function, so that its time evolution:
K_t(x) = {1\over \sqrt{2\pi (i t + \epsilon)}} e^{ - {x^2 \over 2(it+\epsilon)} }
gives the propagator.
Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange--- the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct since the square of a delta function is divergent in the same way.
The factor of \epsilon is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that \epsilon becomes zero, K becomes purely oscillatory and integrals of K are not absolutely convergent. In the remainder of this section it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit \scriptstyle \epsilon\rightarrow 0 is only to be taken after the final state is calculated.
K_t(x,y) = K_t(x-y) = {1\over \sqrt{2\pi it}} e^{ {i(x-y)^2 \over 2t} }
In the limit when t is small, the propagator converges to a delta function:
\lim_{t\rightarrow 0} K_t(x-y) = \delta(x-y)
but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times:
\int_x K_t(x) = 1
since this integral is the inner-product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit \epsilon\rightarrow 0 is taken after everything else.
So the propagation kernel is the future time evolution of a delta function, and it is continuous in a sense, it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position x_0:
\psi_0(x) = \delta(x - x_0)
it becomes the oscillatory wave:
\psi_t(x) = {1\over \sqrt{2\pi i t}} e^{ i (x-x_0)^2 /2t}
Since every function can be written as a sum of narrow spikes:
\psi_0(x) = \int_y \psi_0(y) \delta(x-y)
the time evolution of every function is determined by the propagation kernel:
\psi_t(x) = \int_y \psi_0(y) {1\over \sqrt{2\pi it}} e^{i (x-y)^2 / 2t}
And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at x_0 times the amplitude that it went from x_0 to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the initial condition.
\psi_t = K * \psi_0
\int_y K(x-y;t)K(y-z;t') = K(x-z;t+t')
Analytic Continuation to Diffusion
The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is random walking, the probability density function at any point satisfies the diffusion equation:
{\partial \over \partial t} \rho = {1\over 2} {\partial^2 \over \partial x^2 } \rho
where the factor of 2, which can be removed by a rescaling either time or space, is only for convenience.
A solution of this equation is the spreading Gaussian:
\rho_t(x) = {1\over \sqrt{2\pi t}} e^{-x^2 \over 2t}
and since the integral of \rho_t, is constant, while the width is becoming narrow at small times, this function approaches a delta function at t=0:
\lim_{t\rightarrow 0} \rho_t(x) = \delta(x)
again, only in the sense of distributions, so that
\lim_{t\rightarrow 0} \int_x f(x) \rho_t(x) = f(0)
for any smooth test function f.
Diffusing for a time t and then for a time t' is the same as diffusing for the total time, so the kernel obeys the composition rule:
K_{t+t'} = K_{t}*K_{t'}
Which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H:
K_t(x) = e^{-tH}
which is the infinitesimal diffusion operator.
H= -{\nabla^2\over 2}
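A numerical sanity check of the composition rule K_{t+t'} = K_t * K_{t'} for the diffusion kernel (grid sizes and times are made-up values):

```python
import numpy as np

# The heat kernel composes under convolution.
def K(x, t):
    return np.exp(-x**2 / (2*t)) / np.sqrt(2*np.pi*t)

x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]
t, tp = 1.5, 2.5

lhs = K(x, t + tp)
rhs = np.convolve(K(x, t), K(x, tp), mode="same") * dx
print(np.max(np.abs(lhs - rhs)))   # ~1e-6: the kernels compose as claimed
```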
K_t(x,x') = K_t(x-x')
Translation invariance means that continuous matrix multiplication:
C(x,x'') = \int_{x'} A(x,x')B(x',x'')
is really convolution:
C(\Delta) = C(x-x'') = \int_{x'} A(x-x') B(x'-x'') = \int_{y} A(\Delta-y)B(y)
The exponential can be defined over a range of t's which include complex values, so long as integrals over the propagation kernel stay convergent.
K_z(x) = e^{-zH}
As long as the real part of z is positive, for large values of x K is exponentially decreasing and integrals over K are absolutely convergent.
The limit of this expression for z coming close to the pure imaginary axis is the Schrodinger propagator:
K_t^{\rm Schr} = K_{it+\epsilon} = e^{-(it+\epsilon)H}
and this gives a more conceptual explanation for the time evolution of Gaussians. From the fundamental identity of exponentiation, or path integration:
K_z * K_{z'} = K_{z+z'}
holds for all complex z values where the integrals are absolutely convergent so that the operators are well defined.
So that quantum evolution starting from a Gaussian, which is the diffusion kernel K:
\psi_0(x) = K_a(x) = K_a * \delta(x)
gives the time evolved state:
\psi_t = K_{it} * K_a = K_{a+it}
This explains the diffusive form of the Gaussian solutions:
\psi_t(x) = {1\over \sqrt{2\pi (a+it)} } e^{- {x^2\over 2(a+it)} }
Variational Principle
The variational principle asserts that for any Hermitian matrix A, the lowest eigenvalue minimizes the quantity:
\langle v,Av \rangle = \sum_{ij} A_{ij} v^*_i v_j
on the unit sphere \langle v,v \rangle = 1. This follows from the method of Lagrange multipliers: at the minimum, the gradient of the function is parallel to the gradient of the constraint:
{\partial\over \partial v_i} \langle v,Av\rangle = \lambda {\partial \over \partial v_i} \langle v,v\rangle
which is the eigenvalue condition
\sum_{j} A_{ij} v_j = \lambda v_i
so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:
\langle v,Av\rangle = \lambda\langle v,v\rangle = \lambda
When the Hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level.
In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions \psi; the ground state minimizes
\langle \psi | H |\psi \rangle = \int \psi^* H \psi = \int \psi^* (-\nabla^2 + V(x)) \psi
or, after an integration by parts,
\langle \psi | H |\psi \rangle = \int |\nabla \psi|^2 + V(x) |\psi|^2
The stationary points come in complex conjugate pairs, since the integrand is real. Because the eigenvalue condition is linear, any linear combination of eigenfunctions with the same eigenvalue is again a stationary point, so the real and imaginary parts of an eigenfunction are each stationary points too.
The lowest energy state has a positive definite wavefunction, because given a \psi which minimizes the integral, the absolute value |\psi| is also a minimizer. But if \psi changed sign, this minimizer would have sharp corners where the sign changes, and rounding out the corners would reduce the gradient contribution and lower the energy, a contradiction; so the ground state wavefunction never changes sign.
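The finite-dimensional version of this argument can be demonstrated directly: projected gradient descent on the unit sphere drives \langle v, Av\rangle down to the lowest eigenvalue. A minimal sketch (the random matrix, step size, and iteration count are arbitrary illustrative choices), assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                      # a random real symmetric matrix

v = rng.standard_normal(n)
v /= np.linalg.norm(v)

# Projected gradient descent for <v, A v> on the unit sphere
for _ in range(5000):
    g = A @ v                          # gradient direction of the quadratic form
    g -= (v @ g) * v                   # remove the component along v (the constraint)
    v -= 0.01 * g
    v /= np.linalg.norm(v)             # re-normalize back onto the sphere

print(v @ A @ v)                       # variational estimate ...
print(np.linalg.eigvalsh(A).min())     # ... matches the lowest eigenvalue
```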
Potential and Ground State
For a particle in a positive definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrodinger, has led to many exact solutions.
A positive definite wavefunction:
\psi = e^{-W(x)}
is a solution to the time-independent Schrodinger equation with m=1 and potential:
V(x) = {1\over 2} |\nabla W|^2 - {1\over 2} \nabla^2 W
with zero total energy. Here W is minus the logarithm of the ground state wavefunction. The second derivative term is higher order in \scriptstyle \hbar, and ignoring it gives the semi-classical approximation.
The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem, the probability for finding a particle diffusing in space with the free-energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker Planck equation for this diffusion is the Schrodinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation--- either as the energy levels of a quantum system, or the relaxation times for a stochastic equation.
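The identity behind this construction, that \psi = e^{-W} solves -½\psi'' + V\psi = 0 whenever V = ½|∇W|² - ½∇²W, can be confirmed symbolically. A one-dimensional sketch, assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
W = sp.Function('W')(x)                # an arbitrary superpotential W(x)

psi = sp.exp(-W)
V = sp.Rational(1, 2) * sp.diff(W, x)**2 - sp.Rational(1, 2) * sp.diff(W, x, 2)

# H psi = -1/2 psi'' + V psi should vanish identically, for any W
Hpsi = -sp.Rational(1, 2) * sp.diff(psi, x, 2) + V * psi
print(sp.simplify(Hpsi))               # -> 0
```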
Harmonic Oscillator
Main article: Quantum harmonic oscillator.
W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is:
W(x) = {\omega x^2 \over 2}
with an arbitrary constant \omega, which gives the potential:
V(x) = {1\over 2} \omega^2 x^2 - {\omega \over 2}
This potential describes a Harmonic oscillator, with the ground state wavefunction:
\psi(x) = e^{-\omega x^2 / 2}
The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted Harmonic oscillator potential:
V(x) = {\omega x^2 \over 2}
is then the additive constant:
E_0 = {\omega\over 2}
which is the zero point energy of the oscillator.
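The zero point energy can be checked by diagonalizing the unshifted oscillator Hamiltonian on a grid. A minimal sketch (the grid size and box length are illustrative choices; ħ = m = 1), assuming numpy:

```python
import numpy as np

omega, N = 1.0, 1500
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) omega^2 x^2 with a three-point Laplacian
H = (np.diag(1.0 / dx**2 + 0.5 * omega**2 * x**2)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

# Lowest levels approach [0.5, 1.5, 2.5]*omega: spacing omega, zero point omega/2
print(np.linalg.eigvalsh(H)[:3])
```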
Coulomb Potential
Another simple but useful form is
W(x) = 2a|x|
where W is proportional to the radial coordinate. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, where it has a delta function contribution:
V(x) = 2a^2 - 2a\,\delta(x)
and, up to the additive constant 2a^2, this is the lowest energy state for an attractive delta function potential,
V(x) = -2a\,\delta(x)
with the ground state energy:
E_0 = - 2a^2
and the ground state wavefunction:
\psi = e^{-2a|x|}
In higher dimensions, the same form gives the potential:
V(x) = 2a^2 - {a(d-1) \over r}
which can be identified as the attractive Coulomb law, up to an additive constant which is the ground state energy. This is the superpotential that describes the lowest energy level of the Hydrogen atom, once the mass is restored by dimensional analysis:
\psi_0 = e^{-r/r_0}
where r_0 = 1/2a is the Bohr radius, with energy
E_0 = -2a^2
The ansatz
W(x) = a r + b \log(r)
modifies the Coulomb potential to include a quadratic term proportional to 1/r^2, which is useful for nonzero angular momentum.
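This claim is quick to verify symbolically, using the radial Laplacian ∇²W = W'' + (d-1)W'/r. A sketch assuming sympy is available; the printed potential contains a constant, a Coulomb 1/r piece, and a 1/r² piece proportional to b:

```python
import sympy as sp

r, a, b, d = sp.symbols('r a b d', positive=True)
W = a * r + b * sp.log(r)

Wp = sp.diff(W, r)
lap = sp.diff(W, r, 2) + (d - 1) / r * Wp   # radial Laplacian of W in d dimensions
V = sp.expand(sp.Rational(1, 2) * Wp**2 - sp.Rational(1, 2) * lap)
print(V)   # constant + (1/r) Coulomb term + (1/r^2) term proportional to b
```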
Operator Formalism
Bra-ket Notation
In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. The wavefunction is just an alternate name for the vector of complex amplitudes, and only in the case of a single particle in the position representation is it a wave in the usual sense, a wave in space time. For more complex systems, it is a wave in an enormous space of all possible worlds. Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state.
The wavefunction vector can be written in several ways:
1. As an abstract ket vector |\psi\rangle.
2. As a list of complex numbers, the components relative to a discrete list of normalizable basis vectors |\eta_i\rangle:
c_i = \langle \eta_i |\psi \rangle
3. As a continuous superposition of non-normalizable basis vectors, like position states |x\rangle:
|\psi\rangle = \int_x \psi(x) |x\rangle
The divide between the continuous basis and the discrete basis can be bridged by limiting arguments. The two can be formally unified by thinking of each as a measure on the real number line.
In the most abstract notation, the Schrödinger equation is written:
i\hbar {d\over dt} |\psi\rangle = H |\psi\rangle
which only says that the wavefunction evolves linearly in time, and names the linear operator which gives the time derivative the Hamiltonian H. In terms of the discrete list of coefficients:
i\hbar {d\over dt} C_i = \sum_j H_{ij} C_j
which just reaffirms that time evolution is linear, since the Hamiltonian acts by matrix multiplication.
In a continuous representation, the Hamiltonian is a linear operator, which acts by the continuous version of matrix multiplication:
\langle x| i\hbar {d\over dt} |\psi\rangle = \langle x|H|\psi\rangle = \hat{H} \psi (x)
Taking the complex conjugate:
-i\hbar {d\over dt} \langle \psi | = \langle \psi | H^\dagger
In order for the time-evolution to be unitary, to preserve the inner products, the time derivative of the inner product must be zero:
i\hbar {d\over dt} \langle \psi | \psi \rangle = \langle\psi | H - H^\dagger |\psi\rangle = 0
for an arbitrary state |\psi\rangle, which requires that H is Hermitian. In a discrete representation this means that \scriptstyle H_{ij} = H_{ji}^*. When H is continuous, it should be self-adjoint, which adds the technical requirement that H does not mix up normalizable states with states which violate boundary conditions or which are grossly unnormalizable.
The formal solution of the equation is the matrix exponential (natural units):
|\psi(t)\rangle = e^{-i H t} |\psi(0)\rangle = U(t) |\psi(0)\rangle
For every time-independent Hamiltonian operator, \hat H, there exists a set of quantum states, \left|\psi_n\right\rangle, known as energy eigenstates, and corresponding real numbers E_n satisfying the eigenvalue equation:
H |\psi_n \rangle = E_n |\psi_n \rangle \,
This is the time-independent Schrodinger equation.
For the case of a single particle, the Hamiltonian is the following linear operator (natural units):
H = -{\nabla^2 \over 2m} + V(x) = {p^2\over 2m} + V(x)
which is a self-adjoint operator when V is not too singular and does not grow too fast. Self-adjoint operators have the property that their eigenvalues are real, and their eigenvectors form a complete set, either discrete or continuous.
Expressed in a basis of eigenvectors of H, the Schrodinger equation becomes trivial:
\mathrm{i} \hbar \frac{\partial}{\partial t} \left| \psi_n \left(t\right) \right\rangle = E_n \left|\psi_n\left(t\right)\right\rangle.
This means that each energy eigenstate is only multiplied by a complex phase:
\left| \psi_n \left(t\right) \right\rangle = \mathrm{e}^{-\mathrm{i} E_n t / \hbar} \left|\psi_n\left(0\right)\right\rangle.
This is what matrix exponentiation means: the time evolution acts to rotate the phase of each eigenfunction of H.
When H is expressed as a matrix for wavefunctions in a discrete energy basis:
i\hbar {d\over dt} C_i = E_i C_i
so that:
C_n(t) = e^{-iE_n t} C_n(0)
The physical properties of the C's are extracted by acting on them with operators, which in this basis are matrices. By redefining the basis so that it rotates with time, the matrices become time dependent, which is the Heisenberg picture.
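The equivalence between evolving in the energy basis and exponentiating H directly can be checked numerically. A minimal sketch (assuming numpy and scipy are available; the random Hamiltonian and the time are illustrative choices, ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2               # a random Hermitian "Hamiltonian"

psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)
t = 2.3

# Evolution in the energy eigenbasis: each coefficient picks up a phase
E, U = np.linalg.eigh(H)               # columns of U are eigenvectors
C0 = U.conj().T @ psi0
psi_t = U @ (np.exp(-1j * E * t) * C0)

# Direct matrix exponential for comparison
psi_t_direct = expm(-1j * H * t) @ psi0
print(np.allclose(psi_t, psi_t_direct))   # True
```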
Galilean Invariance
Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px - Ht must have a very special form--- translations in p need to be compensated by a shift in H. This is only true when H is quadratic.
The infinitesimal generator of Boosts in both the classical and quantum case is:
B = \sum_i m_i x_i(t) - t \sum_i p_i
where the sum is over the different particles, and B,x,p are vectors.
The poisson bracket/commutator of \scriptstyle B\cdot v with x and p generate infinitesimal boosts, with v the infinitesimal boost velocity vector:
[B\cdot v ,x_i] = vt
[B\cdot v ,p_i] = v m_i
Iterating these relations is simple, since they add a constant amount at each step. By iterating, the dv's incrementally sum up to the finite quantity V:
x_i \rightarrow x_i + Vt
p \rightarrow p_i + m_i V
B divided by the total mass is the current center of mass position minus the time times the centre of mass velocity:
B = M X_\mathrm{cm} - t P_\mathrm{cm}
In other words, B/M is the current guess for the position that the centre of mass had at time zero.
The statement that B doesn't change with time is the centre of mass theorem. For a Galilean invariant system, the centre of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the centre of mass.
Since B is explicitly time dependent, H does not commute with B, rather:
{dB\over dt} = [H,B] + {\partial B \over \partial t} = 0
this gives the transformation law for H under infinitesimal boosts:
[B\cdot v,H] = - P_\mathrm{cm} \cdot v
The interpretation of this formula is that the change in H under an infinitesimal boost is entirely given by the change of the centre of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.
The two quantities (H,P) form a representation of the Galilean group with central charge M, where only H and P are classical functions on phase-space or quantum mechanical operators, while M is a parameter. The transformation law for infinitesimal v:
P' = P + M v
H' = H - P\cdot v
can be iterated as before--- P goes from P to P+MV in infinitesimal increments of v, while H changes at each step by an amount proportional to P, which changes linearly. The final value of H is then changed by the value of P halfway between the starting value and the ending value:
H' = H - (P+{MV\over 2})\cdot V = H - P\cdot V - {MV^2\over 2}.
The factors proportional to the central charge M are the extra wavefunction phases.
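The iteration of infinitesimal boosts can be mimicked numerically: incrementing P by M dv and H by -P dv at each step reproduces the finite transformation, including the MV²/2 term coming from the halfway-averaged momentum. A minimal one-dimensional sketch (all constants are illustrative choices):

```python
m, p0, h0 = 2.0, 3.0, 7.0        # mass, initial momentum, initial energy (1D)
V, steps = 1.5, 100000
dv = V / steps

p, h = p0, h0
for _ in range(steps):
    h -= p * dv                  # dH = -P dv under an infinitesimal boost
    p += m * dv                  # dP = M dv

print(h)                              # iterated result
print(h0 - p0 * V - 0.5 * m * V**2)   # closed form: agrees up to O(dv)
```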
Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single particle. Given a multi-particle time dependent solution
\psi_t(x_1, x_2, \ldots, x_n)
with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:
\psi'_t = \psi_t(x_1 + vt, \ldots, x_n + vt)\, e^{i\left(Mv\cdot X_\mathrm{cm} - {Mv^2\over 2}t\right)}.
For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the centre of mass motion to be ignored.
Like information invisibly encoded in strands of DNA, engineered works — bridges, dams, aircraft, disk drives, motors, and so on — are characteristically imprinted with math. With few exceptions, math is the foundation as well as the structure for all technological achievement; and on the wings of computers, it may carry us farther than anyone ever imagined. Take a journey along the evolutionary path of mathematical analysis, and you see not only the past but the future.
From maps to Mars
Mathematical modeling, in one form or another, is behind virtually all technological progress since the beginning of scientific discovery.
Follow the trail of Euclidean geometry, for example. Starting in ancient times and extending through the Renaissance, it has led to such technological advancements as surveying, mapping, navigation, and astronomy. Granted, the tools and methods were primitive by today’s standards, but they allowed man for the first time to explore the extent of his own universe.
The next big leap was the development of tools to model time-variant events. In the 17th century, mathematicians worked out the theory to describe real-world (dynamic) phenomena — motion. The breakthrough came from Newton and Leibnitz, who independently developed differential calculus.
One feat in particular demonstrated the power of the new math. With a relatively simple model, Newton not only reproduced known planetary orbits, he predicted future positions. The accuracy of his celestial mechanics is impressive by any measure and it had broad repercussions, challenging man’s view of the universe and his role in it.
Rather than attempting to describe a planetary orbit as a static curve, Newton’s revolutionary invention of a dynamic model for motion expressed how changes in velocity are related to position. The model, as Leibnitz said it must be, was a set of differential equations expressing the relationship between position and its derivatives.
A stone’s throw
Dynamic equations, while facilitating analysis of complicated systems, presented their own set of challenges. Suppose you want to model the trajectory of a rock thrown through the air. Using Newton’s Law of Motion, and assuming gravity is constant near the earth’s surface, the differential equation describing the stone’s height u as a function of time is:
m \frac{d^2 u}{dt^2} = F
where t = time, m = mass, and F is the constant force (gravity) acting on the stone. The function
\frac{d^2 u}{dt^2}
is the second derivative of position with respect to time, or acceleration.
The solution to this equation is typically expressed as a polynomial in t
u(t) = \frac{F}{2m} t^2 + a t + b
where the constants a and b must be determined from a known position and velocity corresponding to a given time. In the 18th century, this was about as far as one could take it.
Few dynamic problems are as simple as projectile motion, however. Consider the motion of planets in our solar system. The main influencing force, as Newton pointed out, is interplanetary gravitational pull. For just two planets, the force of gravity is proportional to each planet’s mass and inversely proportional to the square of the separation distance. When you consider the entire solar system, the force on each planet is the sum of the forces imposed by every other planet. Using Newton’s Law of Motion, the position of Earth, Mercury, Jupiter, and every other planet in our solar system is:
m_i \frac{d^2 u_i}{dt^2} = \sum_{j \neq i} \frac{C\, m_i m_j\,(u_j - u_i)}{\left|u_j - u_i\right|^3}
where u_i indicates the position of planet i, t = time, m_i = mass of planet i, and C = Newton's constant of gravity. Note that u_i is a vector with three components, the planet's x, y, z coordinates.
This equation, although it involves a more complicated force, is somewhat similar to that of the tossed stone. But there are differences that make the solution orders of magnitude more difficult. For instance, you can’t solve for the position of a single planet by itself; you need the position of all other planets to determine the force acting on the one of interest.
More perplexing is the fact that there doesn’t seem to be an analytical solution. Newton showed that if only two planets were in space, they would travel in elliptical, parabolic, or hyperbolic orbits around the center of gravity. Since then, despite enormous research efforts, most mathematical issues related to systems of three or more planets remain unresolved; to this day, no one has found an analytical solution to the planetary equations of motion.
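In practice, such orbits are therefore computed by stepping the equations of motion forward numerically. A minimal two-body sketch in Python (assuming numpy; units with C = 1, with masses, positions, and the time step chosen purely for illustration) using the leapfrog (velocity Verlet) scheme, which conserves energy well over long runs:

```python
import numpy as np

C = 1.0                                    # Newton's constant in arbitrary units
m = np.array([1.0, 1e-3])                  # a heavy "sun" and a light "planet"
u = np.array([[0.0, 0.0], [1.0, 0.0]])     # positions
v = np.array([[0.0, 0.0], [0.0, 1.0]])     # planet speed ~ circular-orbit value

def accel(u):
    a = np.zeros_like(u)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = u[j] - u[i]
                a[i] += C * m[j] * d / np.linalg.norm(d)**3
    return a

dt, a = 1e-3, accel(u)
for _ in range(10000):                     # leapfrog / velocity Verlet steps
    v += 0.5 * dt * a
    u += dt * v
    a = accel(u)
    v += 0.5 * dt * a

print(np.linalg.norm(u[1] - u[0]))         # stays near 1: a stable, nearly circular orbit
```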
Partial victory
We’ve defined just two problems so far, both on the basis of a single independent variable. Many physical processes, however, involve multiple independent variables. Analyzing phenomena of this sort requires a more advanced form of math known as partial differential equations.
One of the most well-known partial differential equations of all time is the Poisson equation. Developed in the early 19th century, it applies to many things observed in physics including the gravitational field of a planetary system. In the previous example we assumed the masses of the planets to be located at infinitely small points. A more realistic analysis accounts for distributed mass, which is exactly what Poisson’s equation does:
\nabla^2 u = 4\pi g \rho
The function u is the gravity potential, g is the gravitational constant, and ρ is the mass density as a function of the independent variables x, y, z.
Poisson’s equation also applies to electromagnetics, geophysics, and chemical engineering. Other partial differential equations of note are the heat-transfer equation
which describes heat flow in solid objects, and the wave equation
which expresses wave propagation in acoustics and electromagnetics. In both cases, the function u depends on time t as well as position (x, y, z). The remaining term, f, is a source function that varies with time, position, and the unknown function u.
These and other partial differential equations — including the Schrödinger equation in quantum mechanics, the Navier equation in structural mechanics, and the Navier-Stokes and Euler equations in fluid dynamics — are fundamental to all sciences and have greatly aided the development of modern day technology.
Nature’s curve
Partial differential equations are typically easier to solve if all terms involving the function of interest and its derivatives can be reduced to a linear combination with coefficients that do not depend on that function. If that’s the case, the equation is said to be linear; otherwise it’s nonlinear.
The distinction between linear and nonlinear partial differential equations (PDEs) is crucial when it comes time to solve them. Many linear PDEs — including Poisson’s equation, the heat equation, the wave equation, and the equation describing the tossed stone — reduce to an exact form by proper manipulation. Some of the techniques include separation of variables, superposition, Fourier series, and Laplace transforms.
In contrast, nonlinear PDEs — such as the Navier-Stokes equation and those that express planetary motion — are seldom solvable in analytical form. Instead you must rely on numerical solutions and approximations. Many algorithms for this purpose exist today and are quite accurate even when compared to exact mathematical solutions.
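As a small illustration of such a numerical solution, the one-dimensional heat equation ∂u/∂t = α ∂²u/∂x² can be stepped forward with an explicit finite-difference scheme and compared against an exact solution. All parameters below are illustrative, and the explicit scheme is stable only for dt ≤ dx²/(2α):

```python
import numpy as np

nx, alpha = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha            # within the stability limit dt <= dx^2/(2 alpha)
x = np.linspace(0.0, 1.0, nx)

u = np.sin(np.pi * x)               # initial temperature profile, ends held at zero
steps = 2000
for _ in range(steps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * steps * dt)
print(np.max(np.abs(u - exact)))    # small discretization error
```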
Three centuries ago Leibnitz envisioned as much, predicting the eventual development of a general method that could be used to solve any differential equation.
It was in pursuit of such dreams as this, and the more pressing needs of navigators and astronomers, that early mathematicians looked for easier calculation methods. They devoted centuries to creating tables, notably of logarithms and trigonometric functions, that allowed virtually anyone to quickly and accurately calculate answers to a variety of problems.
The tables, however, were only a stepping stone. Even in his time Leibnitz realized the benefits of automating math with machines. “Knowing the algorithm of this calculus,” he said, “all differential equations that can be solved by a common method, not only addition and subtraction but also multiplication and division, could be accomplished by a suitably arranged machine.”
Crunch time
Devices that “carry out the mechanics of calculations” emerged just like Leibnitz predicted, but the advent of digital computers is what finally put the process of developing computational solutions over the hurdle. In the decades since, researchers have developed many programs that approximate continuous functions using repetitive calculations. The techniques, by necessity, have relied on finesse rather than brute force.
To appreciate the challenge posed by even a simple differential equation, consider the analysis of a diode. Here, the quantities of interest are the electric potential of the valence band and the bipolar (electron–hole) current in the p-n junction.
The first step is to model the current-voltage (I-V) relationship. Expressed in terms of partial differential equations, the I-V response is based on the conservation of charge and the relation between field strength and the concentration of electrons and holes. The full I-V characteristics are obtained by post-processing a family of solutions for a series of applied voltages.
Because there isn’t a nice, neat formula describing all points in the diode at all times, the problem has to be solved by breaking the region of interest into a large number of small cells. For a simple diode, the solution may involve tens of thousands of cells strategically concentrated in regions where the electric field changes rapidly.
Focusing on small elements simplifies calculations, but it also means that all values depend on those of the surrounding cells. What’s more, if the field distribution changes appreciably with applied voltage, it becomes necessary to regenerate all the elements for each voltage step.
Generating and solving thousands upon thousands of interrelated equations like this can take billions of arithmetical operations. Just two decades ago this would have been too much for almost any computational machine, but with computer power doubling every 18 months — by some estimates our problem-solving capacity has improved by a factor of a million over the last 30 years — we now have the ability to solve these types of problems on our “suitably arranged machines,” otherwise known as desktop PCs.
Methods keep pace
Computational power isn’t the only thing improving. Recent developments in numerical methods have also contributed to technological progress.
One of the more notable landmarks in terms of numerical methods was the invention of the fast Fourier transform (FFT) in the beginning of the 1960s. FFTs have slashed, by orders of magnitude, the time needed to decompose an arbitrary waveform into its constituent frequencies. They have literally changed our view about digital signal processing, and they’re at the center of innumerable algorithms for robotics, imaging, vibration control, and error correction.
Although there’s no inherent link between the frequency domain and differential calculus, the two seem to complement each other quite well. Indeed, with FFTs and differential math we can now quantitatively describe phenomena as varied as electron clouds, clusters of galaxies, and membrane deflection. Furthermore, FFTs are at the heart of fast Poisson solvers, which make it possible, for example, to run simulations that account for millions of galaxies. With such power, scientists can now examine various scenarios for the evolution of the universe, running their simulations in fast-forward and reverse.
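A minimal example of such a decomposition, assuming numpy: a waveform built from two known tones is transformed with the FFT and its constituent frequencies read off the spectrum.

```python
import numpy as np

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
# A waveform built from 50 Hz and 120 Hz components
sig = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
amps = 2 * np.abs(spectrum) / len(sig)      # single-sided amplitude

print(freqs[amps > 0.1])                    # -> [ 50. 120.]
```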
Trying triangles
In more recent years, another numerical method has been enlisted in solving differential equations: the finite element method. It first appeared about 50 years ago as a tool for structural analysis, where it quickly became the standard. Since then it has evolved into a technology with general application.
To see what the finite element method brings to PDEs, consider the deflection of a thin membrane subject to normal loads. The first step in the solution is to divide the region into a set of triangles. Triangles are useful for approximating irregular domains and they address the need of numeric methods to discretize the continuous world.
With finite elements, the solution simplifies to determining the position of the corners of each triangle. Think of the loaded membrane as a set of plane triangular facets joined along their edges, forming a dome. The general mathematical approach is to compute the corner positions, or heights, that make the dome closest to the exact deflection shape. This is obviously less taxing than trying to come up with an analytical solution.
Moreover, the procedure is quite systematic — generate elements, compute solution, and display results — so it lends itself to computer implementation. Graphically oriented operating environments and off-the-shelf application software only make the process easier.
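The full two-dimensional triangle version needs mesh-handling machinery, but the same assemble-and-solve procedure can be shown in one dimension with piecewise-linear "hat" elements. A sketch for the model problem -u'' = f on (0, 1) with fixed ends, assuming numpy (the element count and the load f = 1 are illustrative choices):

```python
import numpy as np

# Model problem: -u'' = 1 on (0, 1) with u(0) = u(1) = 0
n = 50                              # number of linear elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

K = np.zeros((n + 1, n + 1))        # global stiffness matrix
F = np.zeros(n + 1)                 # global load vector
for e in range(n):                  # assemble one element at a time
    i, j = e, e + 1
    K[np.ix_([i, j], [i, j])] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    F[[i, j]] += 0.5 * h            # integral of f = 1 against each hat function

u = np.zeros(n + 1)                 # boundary nodes stay at zero
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

exact = 0.5 * nodes * (1.0 - nodes)
print(np.max(np.abs(u - exact)))    # ~machine precision: nodally exact in 1D
```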
Moving to multiphysics
As interesting as the previous examples may seem, the only physical phenomenon they account for is motion. Nature, however, is more complicated than that. As soon as any system starts to move, it begins to heat. If you want to consider dynamics, therefore, you need to think about heat transfer. And since heat transfer usually influences other physical properties — electrical conductivity, chemical reaction rate, magnetic reluctance — you’ve got a lot of work ahead of you if it’s accuracy you’re after.
A computational approach that simultaneously accounts for all the phenomena related to an event is called multiphysics. This, too, is possible with today’s computer hardware and software.
To appreciate the need for a multiphysics approach to numerical analysis, imagine trying to model the stresses in a cracked heat-exchange tube like you might find in a pulp mill. The tubes consist of two concentric layers of different stainless steels. Assume the culprit is a flaw in the joint between the inner and outer tubes.
A one-dimensional model will show that the flaw impedes heat flow, setting up a temperature difference that creates thermal stresses. With the addition of a dynamic modeler, one can also show that the stresses propagate the crack along the interface, causing the temperature difference to increase even more. The interaction of heat transfer and mechanics, and its cumulative effect, is by no means limited to heat exchangers, suggesting that multiphysics may one day be the standard computational approach.
Svante Littmarck is President of Comsol Inc., Burlington, Mass., and one of the founders of parent company Comsol AB.
Special thanks to: Dr. Lars Langemyr of Comsol AB, Prof. Jesper Oppelstrup of The Royal Institute of Technology in Stockholm, Sweden, and Paul Schreier, consultant. |
Density functional theory
Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals, i.e. functions of another function, which in this case is the spatially dependent electron density. Hence the name density functional theory comes from the use of functionals of the electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange only Hartree–Fock theory and its descendants that include electron correlation.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some other strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors.[1] Its incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms)[2] or where dispersion competes significantly with other effects (e.g. in biomolecules).[3] The development of new DFT methods designed to overcome this problem, by alterations to the functional[4] or by the inclusion of additive terms,[5][6][7][8] is a current research topic.
Overview of method
Although density functional theory has its conceptual roots in the Thomas–Fermi model, DFT was put on a firm theoretical footing by the two Hohenberg–Kohn theorems (H–K).[9] The original H–K theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these.[10][11]
The first H–K theorem demonstrates that the ground state properties of a many-electron system are uniquely determined by an electron density that depends on only 3 spatial coordinates. It lays the groundwork for reducing the many-body problem of N electrons with 3N spatial coordinates to 3 spatial coordinates, through the use of functionals of the electron density. This theorem can be extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second H–K theorem defines an energy functional for the system and proves that the correct ground state electron density minimizes this energy functional.
Within the framework of Kohn–Sham DFT (KS DFT), the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of non-interacting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange-correlation part of the total-energy functional remains unknown and must be approximated.
Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original H-K theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the non-interacting system.
Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential V in which the electrons are moving. A stationary electronic state is then described by a wavefunction \Psi(\vec r_1, \ldots, \vec r_N) satisfying the many-electron time-independent Schrödinger equation
\hat{H}\Psi = \left[\hat{T} + \hat{V} + \hat{U}\right]\Psi = E\Psi
where, for the N-electron system, \hat{H} is the Hamiltonian, E is the total energy, \hat{T} is the kinetic energy, \hat{V} is the potential energy from the external field due to positively charged nuclei, and \hat{U} is the electron-electron interaction energy. The operators \hat{T} and \hat{U} are called universal operators, as they are the same for any N-electron system, while \hat{V} is system dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term \hat{U}.
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically map the many-body problem, with \hat{U}, onto a single-body problem without \hat{U}. In DFT the key variable is the electron density n(\vec r), which for a normalized \Psi is given by
n(\vec r) = N \int d^3r_2 \cdots \int d^3r_N\, \Psi^*(\vec r, \vec r_2, \ldots, \vec r_N)\, \Psi(\vec r, \vec r_2, \ldots, \vec r_N)
This relation can be reversed, i.e., for a given ground-state density n_0(\vec r) it is possible, in principle, to calculate the corresponding ground-state wavefunction \Psi_0(\vec r_1, \ldots, \vec r_N). In other words, \Psi_0 is a unique functional of n_0,[9]
\Psi_0 = \Psi[n_0]
and consequently the ground-state expectation value of an observable \hat{O} is also a functional of n_0:
O[n_0] = \left\langle \Psi[n_0] \left| \hat{O} \right| \Psi[n_0] \right\rangle
In particular, the ground-state energy is a functional of n_0,
E_0 = E[n_0] = \left\langle \Psi[n_0] \left| \hat{T} + \hat{V} + \hat{U} \right| \Psi[n_0] \right\rangle
where the contribution of the external potential can be written explicitly in terms of the ground-state density n_0,
V[n_0] = \int V(\vec r)\, n_0(\vec r)\, d^3r
More generally, the contribution of the external potential can be written explicitly in terms of the density n,
V[n] = \int V(\vec r)\, n(\vec r)\, d^3r
The functionals T[n] and U[n] are called universal functionals, while V[n] is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified \hat{V}, one then has to minimize the functional
E[n] = T[n] + U[n] + \int V(\vec r)\, n(\vec r)\, d^3r
with respect to n(\vec r), assuming one has reliable expressions for T[n] and U[n]. A successful minimization of the energy functional will yield the ground-state density n_0 and thus all other ground-state observables.
The variational problems of minimizing the energy functional E[n] can be solved by applying the Lagrangian method of undetermined multipliers.[12] First, one considers an energy functional that does not explicitly have an electron-electron interaction energy term,
E_s[n] = \left\langle \Psi_s[n] \left| \hat{T} + \hat{V}_s \right| \Psi_s[n] \right\rangle
where \hat{T} denotes the kinetic energy operator and \hat{V}_s is an external effective potential in which the particles are moving, so that n_s(\vec r) = n(\vec r).
Thus, one can solve the so-called Kohn–Sham equations of this auxiliary non-interacting system,
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_s(\vec r)\right] \varphi_i(\vec r) = \varepsilon_i\, \varphi_i(\vec r)
which yields the orbitals \varphi_i that reproduce the density n(\vec r) of the original many-body system,
n(\vec r) = n_s(\vec r) = \sum_i \left|\varphi_i(\vec r)\right|^2
The effective single-particle potential can be written in more detail as
V_s(\vec r) = V(\vec r) + \int \frac{n_s(\vec r\,')}{|\vec r - \vec r\,'|}\, d^3r' + V_{\rm XC}[n_s(\vec r)]
where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion, while the last term V_{\rm XC} is called the exchange-correlation potential. Here, V_{\rm XC} includes all the many-particle interactions. Since the Hartree term and V_{\rm XC} depend on n_s(\vec r), which depends on the \varphi_i, which in turn depend on V_s, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for n(\vec r), then calculates the corresponding V_s and solves the Kohn–Sham equations for the \varphi_i. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
NOTE1: The one-to-one correspondence between the electron density and the single-particle potential is not so smooth. It contains non-analytic structure: the potential, viewed as a functional of the density, contains singularities, cuts, and branches. This may indicate a limitation of our hope for representing the exchange-correlation functional in a simple analytic form.
NOTE2: It is possible to extend the DFT idea to the case of the Green function G instead of the density n. It is called the Luttinger–Ward functional (or kinds of similar functionals), written as \Phi[G]. However, \Phi is determined not at its minimum, but at its extremum. Thus we may have some theoretical and practical difficulties.
NOTE3: There is no one-to-one correspondence between the one-body density matrix n(\vec r, \vec r\,') and the one-body potential V(\vec r, \vec r\,'). (Recall that all the eigenvalues of n(\vec r, \vec r\,') are unity.) In other words, it ends up with a theory similar to the Hartree–Fock (or hybrid) theory.
Relativistic Density Functional Theory (explicit functional forms)
The same theorems can be proven in the case of relativistic electrons, thereby providing a generalization of DFT to the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional.
Consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian H for a relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are used):
H = c\, \vec\alpha \cdot \vec p + \beta m c^2 + V(r)
where V(r) = -Z/r is the Coulomb potential of a point-like nucleus, \vec p is the momentum operator of the electron, and e, m, and c are the electron charge, the electron mass, and the speed of light in vacuum, respectively; finally, \vec\alpha and \beta are the set of Dirac matrices,
\vec\alpha = \begin{pmatrix} 0 & \vec\sigma \\ \vec\sigma & 0 \end{pmatrix}, \qquad \beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}
built from the 2×2 Pauli matrices \vec\sigma and the 2×2 identity I.
To find the eigenfunctions and the corresponding energies, one solves the eigenfunction equation
H \Psi = E \Psi
where \Psi is a four-component wavefunction and E is the associated eigenenergy. It is demonstrated in the article [13] that application of the virial theorem to the eigenfunction equation produces an explicit formula for the eigenenergy of any bound state as a functional of the density, and analogously the virial theorem applied to the eigenfunction equation with the squared Hamiltonian [14] (see also references therein) yields a second such density functional.
Both of the formulas above represent density functionals. The former formula can be easily generalized for the multi-electron case.[15]
Approximations (exchange-correlation functionals)
The major problem with DFT is that the exact functionals for exchange and correlation are not known except for the free electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately.[16] In physics the most widely used approximation is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:
E_{\rm XC}^{\rm LDA}[n] = \int \epsilon_{\rm XC}(n)\, n(\vec r)\, d^3r
The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:
E_{\rm XC}^{\rm LSDA}[n_\uparrow, n_\downarrow] = \int \epsilon_{\rm XC}(n_\uparrow, n_\downarrow)\, n(\vec r)\, d^3r
Highly accurate formulae for the exchange-correlation energy density have been constructed from quantum Monte Carlo simulations of jellium.[17]
The LDA assumes that the density is the same everywhere. Because of this, the LDA has a tendency to over-estimate the exchange-correlation energy.[18] To correct for this tendency, it is common to expand in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows for corrections based on the changes in density away from the coordinate. These expansions are referred to as generalized gradient approximations (GGA)[19][20][21] and have the following form:
E_{\rm XC}^{\rm GGA}[n_\uparrow, n_\downarrow] = \int \epsilon_{\rm XC}(n_\uparrow, n_\downarrow, \vec\nabla n_\uparrow, \vec\nabla n_\downarrow)\, n(\vec r)\, d^3r
Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian) whereas GGA includes only the density and its first derivative in the exchange-correlation potential.
Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density.
Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals.
Generalizations to include magnetic fields
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt,[11] the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris,[22] the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally. Recently, Pan and Sahni[23] extended the Hohenberg–Kohn theorem to the case of nonconstant magnetic fields, using the density and the current density as fundamental variables.
Applications
C60 with isosurface of ground-state electron density as calculated with DFT.
In general, density functional theory finds increasingly broad application in the chemical and material sciences for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied to study how systems respond to synthesis and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behaviour in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors.[24][25] It has also been shown that DFT gives good results in the prediction of the sensitivity of some nanostructures to environmental pollutants like SO2[26] or acrolein,[27] as well as in the prediction of mechanical properties.[28]
In practice, Kohn–Sham theory can be applied in several distinct ways depending on what is being investigated. In solid state calculations, the local density approximations are still commonly used along with plane wave basis sets, as an electron gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange-correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron gas approximation; however, they must reduce to LDA in the electron gas limit. Among physicists, probably the most widely used functional is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized-gradient parametrization of the free electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation functionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a 'training set' of molecules. Unfortunately, although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). Hence in the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.
Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space, with two electrons in every h^3 of volume.[29] For each element of coordinate-space volume d^3r we can fill out a sphere of momentum space up to the Fermi momentum p_F,[30]
\frac{4}{3}\pi p_F^3(\vec r)
Equating the number of electrons in coordinate space to that in phase space gives
n(\vec r) = \frac{8\pi}{3h^3}\, p_F^3(\vec r)
Solving for p_F and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:
T_{\rm TF}[n] = C_F \int n^{5/3}(\vec r)\, d^3r, \qquad C_F = \frac{3h^2}{10m}\left(\frac{3}{8\pi}\right)^{2/3}
As such, they were able to calculate the energy of an atom using this kinetic energy functional combined with the classical expressions for the nuclear-electron and electron-electron interactions (which can both also be represented in terms of the electron density).
Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. An exchange energy functional was added by Dirac in 1928.
However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and by the complete neglect of electron correlation.
Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic energy functional.
The kinetic energy functional can be improved by adding the Weizsäcker (1935) correction:[31][32]
T_{\rm W}[n] = \frac{1}{8} \int \frac{|\vec\nabla n(\vec r)|^2}{n(\vec r)}\, d^3r
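Both kinetic functionals are simple enough to evaluate directly. For the hydrogen ground-state density (in atomic units) the Weizsäcker term alone reproduces the exact kinetic energy of 0.5 hartree, since there is only one orbital, while the Thomas–Fermi term underestimates it. A sketch assuming numpy:

```python
import numpy as np

# Hydrogen ground-state density in atomic units: n(r) = exp(-2r) / pi
r = np.linspace(1e-6, 30.0, 30001)
n = np.exp(-2.0 * r) / np.pi
dn = -2.0 * n                                  # dn/dr, exact for this density

C_F = 0.3 * (3.0 * np.pi**2)**(2.0 / 3.0)      # Thomas-Fermi constant (atomic units)
T_TF = C_F * np.trapz(n**(5.0 / 3.0) * 4.0 * np.pi * r**2, r)
T_W = 0.125 * np.trapz(dn**2 / n * 4.0 * np.pi * r**2, r)

print(T_TF)   # ~0.29 hartree: Thomas-Fermi underestimates the exact 0.5
print(T_W)    # 0.5 hartree: the Weizsacker term is exact for a single orbital
```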
Hohenberg–Kohn theorems
The Hohenberg-Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential.
Theorem 1. The external potential (and hence the total energy), is a unique functional of the electron density.
If two systems of electrons, one trapped in a potential v_1(\vec r) and the other in v_2(\vec r), have the same ground-state density n(\vec r), then v_1(\vec r) - v_2(\vec r) is necessarily a constant.
Corollary: the ground state density uniquely determines the potential and thus all properties of the system, including the many-body wave function. In particular, the "HK" functional, defined as F[n] = T[n] + U[n], is a universal functional of the density (not depending explicitly on the external potential).
Theorem 2. The functional that delivers the ground state energy of the system, gives the lowest energy if and only if the input density is the true ground state density.
For any positive integer N and potential v(\vec r), a density functional F[n] exists such that
E_{(v,N)}[n] = F[n] + \int v(\vec r)\, n(\vec r)\, d^3r
obtains its minimal value at the ground-state density of N electrons in the potential v(\vec r). The minimal value of E_{(v,N)}[n] is then the ground state energy of this system.
Pseudo-potentials
The many electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons, was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.
Ab initio Pseudo-potentials
A crucial step toward more realistic pseudo-potentials was given by Topp and Hopfield[33] and more recently Cronin, who suggested that the pseudo-potential should be adjusted such that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo wave-functions to coincide with the true valence wave-functions beyond a certain distance r_l. The pseudo wave-functions are also forced to have the same norm as the true valence wave-functions and can be written as
R_l^{\rm PP}(r) = R_{nl}^{\rm AE}(r), \qquad r > r_l
where R_l(r) is the radial part of the wavefunction with angular momentum l, and PP and AE denote, respectively, the pseudo wave-function and the true (all-electron) wave-function. The index n in the true wave-functions denotes the valence level. The distance r_l beyond which the true and the pseudo wave-functions are equal is also l-dependent.
Electron smearing
The electrons of the system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the step-like Fermi–Dirac distribution at absolute zero. If there are several degenerate or nearly degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies.[34] One approach is to assign a finite temperature to the electron Fermi–Dirac distribution. Other approaches are to assign a cumulative Gaussian distribution of the electrons or to use a Methfessel–Paxton method.[35][36]
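The occupation schemes mentioned above are short formulas. A sketch of Fermi–Dirac and cumulative-Gaussian smearing (assuming numpy and scipy are available; the eigenvalues, chemical potential mu, and widths are illustrative, and the search for the mu that conserves electron number is omitted):

```python
import numpy as np
from scipy.special import erf

def fermi_dirac(eps, mu, kT):
    """Fractional occupations from a finite electron temperature."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def gaussian_smearing(eps, mu, width):
    """Cumulative-Gaussian (erf) smearing of the zero-temperature step."""
    return 0.5 * (1.0 - erf((eps - mu) / width))

eps = np.array([-1.0, -0.02, 0.0, 0.01, 1.0])   # eigenvalues near the Fermi level
print(fermi_dirac(eps, mu=0.0, kT=0.05))
print(gaussian_smearing(eps, mu=0.0, width=0.05))
```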
Software supporting DFT
DFT is supported by many quantum chemistry and solid-state physics software packages, often along with other methods.
References
1. ^ Assadi, M.H.N; et al. (2013). "Theoretical study on copper's energetics and magnetism in TiO2 polymorphs". Journal of Applied Physics. 113 (23): 233913. arXiv:1304.1854. Bibcode:2013JAP...113w3913A. doi:10.1063/1.4811539.
2. ^ Van Mourik, Tanja; Gdanitz, Robert J. (2002). "A critical note on density functional theory studies on rare-gas dimers". Journal of Chemical Physics. 116 (22): 9620–9623. Bibcode:2002JChPh.116.9620V. doi:10.1063/1.1476010.
3. ^ Vondrášek, Jiří; Bendová, Lada; Klusák, Vojtěch; Hobza, Pavel (2005). "Unexpectedly strong energy stabilization inside the hydrophobic core of small protein rubredoxin mediated by aromatic residues: correlated ab initio quantum chemical calculations". Journal of the American Chemical Society. 127 (8): 2615–2619. doi:10.1021/ja044607h. PMID 15725017.
4. ^ Grimme, Stefan (2006). "Semiempirical hybrid density functional with perturbative second-order correlation". Journal of Chemical Physics. 124 (3): 034108. Bibcode:2006JChPh.124c4108G. doi:10.1063/1.2148954. PMID 16438568.
5. ^ Zimmerli, Urs; Parrinello, Michele; Koumoutsakos, Petros (2004). "Dispersion corrections to density functionals for water aromatic interactions". Journal of Chemical Physics. 120 (6): 2693–2699. Bibcode:2004JChPh.120.2693Z. doi:10.1063/1.1637034. PMID 15268413.
6. ^ Grimme, Stefan (2004). "Accurate description of van der Waals complexes by density functional theory including empirical corrections". Journal of Computational Chemistry. 25 (12): 1463–1473. doi:10.1002/jcc.20078. PMID 15224390.
7. ^ Von Lilienfeld, O. Anatole; Tavernelli, Ivano; Rothlisberger, Ursula; Sebastiani, Daniel (2004). "Optimization of effective atom centered potentials for London dispersion forces in density functional theory". Physical Review Letters. 93 (15): 153004. Bibcode:2004PhRvL..93o3004V. doi:10.1103/PhysRevLett.93.153004. PMID 15524874.
8. ^ Tkatchenko, Alexandre; Scheffler, Matthias (2009). "Accurate Molecular Van Der Waals Interactions from Ground-State Electron Density and Free-Atom Reference Data". Physical Review Letters. 102 (7): 073005. Bibcode:2009PhRvL.102g3005T. doi:10.1103/PhysRevLett.102.073005. PMID 19257665.
9. ^ a b Hohenberg, Pierre; Walter Kohn (1964). "Inhomogeneous electron gas". Physical Review. 136 (3B): B864–B871. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
10. ^ Levy, Mel (1979). "Universal variational functionals of electron densities, first-order density matrices, and natural spin-orbitals and solution of the v-representability problem". Proceedings of the National Academy of Sciences. United States National Academy of Sciences. 76 (12): 6062–6065. Bibcode:1979PNAS...76.6062L. doi:10.1073/pnas.76.12.6062.
11. ^ a b Vignale, G.; Mark Rasolt (1987). "Density-functional theory in strong magnetic fields". Physical Review Letters. American Physical Society. 59 (20): 2360–2363. Bibcode:1987PhRvL..59.2360V. doi:10.1103/PhysRevLett.59.2360. PMID 10035523.
12. ^ Kohn, W.; Sham, L. J. (1965). "Self-consistent equations including exchange and correlation effects". Physical Review. 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.
13. ^ M. Brack (1983), "Virial theorems for relativistic spin-½ and spin-0 particles", Phys. Rev. D, 27: 1950, doi:10.1103/physrevd.27.1950
14. ^ K. Koshelev (2015). "About density functional theory interpretation". arXiv:0812.2919 [quant-ph].
15. ^ K. Koshelev (2007). "Alpha variation problem and q-factor definition". arXiv:0707.1146 [physics.atom-ph].
16. ^ Kieron Burke; Lucas O. Wagner (2013). "DFT in a nutshell". International Journal of Quantum Chemistry. 113 (2): 96. doi:10.1002/qua.24259.
17. ^ John P. Perdew; Adrienn Ruzsinszky; Jianmin Tao; Viktor N. Staroverov; Gustavo Scuseria; Gábor I. Csonka (2005). "Prescriptions for the design and selection of density functional approximations: More constraint satisfaction with fewer fits". Journal of Chemical Physics. 123 (6): 062201. Bibcode:2005JChPh.123f2201P. doi:10.1063/1.1904565. PMID 16122287.
18. ^ Becke, Axel D. (2014-05-14). "Perspective: Fifty years of density-functional theory in chemical physics". The Journal of Chemical Physics. 140 (18): 18A301. Bibcode:2014JChPh.140rA301B. doi:10.1063/1.4869598. ISSN 0021-9606. PMID 24832308.
19. ^ Perdew, John P; Chevary, J A; Vosko, S H; Jackson, Koblar A; Pederson, Mark R; Singh, D J; Fiolhais, Carlos (1992). "Atoms, molecules, solids, and surfaces: Applications of the generalized gradient approximation for exchange and correlation". Physical Review B. 46 (11): 6671. Bibcode:1992PhRvB..46.6671P. doi:10.1103/physrevb.46.6671.
20. ^ Becke, Axel D (1988). "Density-functional exchange-energy approximation with correct asymptotic behavior". Physical Review A. 38 (6): 3098. Bibcode:1988PhRvA..38.3098B. doi:10.1103/physreva.38.3098. PMID 9900728.
21. ^ Langreth, David C; Mehl, M J (1983). "Beyond the local-density approximation in calculations of ground-state electronic properties". Physical Review B. 28 (4): 1809. Bibcode:1983PhRvB..28.1809L. doi:10.1103/physrevb.28.1809.
22. ^ Grayce, Christopher; Robert Harris (1994). "Magnetic-field density-functional theory". Physical Review A. 50 (4): 3089–3095. Bibcode:1994PhRvA..50.3089G. doi:10.1103/PhysRevA.50.3089. PMID 9911249.
23. ^ Pan, Xiao-Yin; Sahni, Viraht (2012). "Hohenberg-Kohn theorem including electron spin". Physical Review A. 86 (4): 042502. Bibcode:2012PhRvA..86d2502P. doi:10.1103/physreva.86.042502.
24. ^ Segall, M.D.; Lindan, P.J (2002). "First-principles simulation: ideas, illustrations and the CASTEP code". Journal of Physics: Condensed Matter. 14 (11): 2717. Bibcode:2002JPCM...14.2717S. doi:10.1088/0953-8984/14/11/301.
25. ^ Hanaor, Dorian A. H.; Assadi, Mohammed H. N.; Li, Sean; Yu, Aibing; Sorrell, Charles C. (2012). "Ab initio study of phase stability in doped TiO2". Computational Mechanics. 50 (2): 185–194. doi:10.1007/s00466-012-0728-4.
26. ^ Somayeh. F. Rastegar, Hamed Soleymanabadi (2014-01-01). "Theoretical investigation on the selective detection of SO2 molecule by AlN nanosheets". Journal of Molecular Modeling. 20 (9). doi:10.1007/s00894-014-2439-6.
27. ^ Somayeh F. Rastegar, Hamed Soleymanabadi (2013-01-01). "DFT studies of acrolein molecule adsorption on pristine and Al- doped graphenes". Journal of Molecular Modeling. 19 (9): 3733–40. doi:10.1007/s00894-013-1898-5. PMID 23793719.
28. ^ Music, D.; Geyer, R.W.; Schneider, J.M. (2016). "Recent progress and new directions in density functional theory based design of hard coatings". Surface & Coatings Technology. 286: 178. doi:10.1016/j.surfcoat.2015.12.021.
29. ^ (Parr & Yang 1989, p. 47)
30. ^ March, N. H. (1992). Electron Density Theory of Atoms and Molecules. Academic Press. p. 24. ISBN 0-12-470525-1.
31. ^ Weizsäcker, C. F. v. (1935). "Zur Theorie der Kernmassen". Zeitschrift für Physik. 96 (7–8): 431–58. Bibcode:1935ZPhy...96..431W. doi:10.1007/BF01337700.
32. ^ (Parr & Yang 1989, p. 127)
33. ^ Topp, William C.; Hopfield, John J. (1973-02-15). "Chemically Motivated Pseudopotential for Sodium". Physical Review B. 7 (4): 1295–1303. Bibcode:1973PhRvB...7.1295T. doi:10.1103/PhysRevB.7.1295.
34. ^ Michelini, M. C.; Pis Diez, R.; Jubert, A. H. (25 June 1998). "A Density Functional Study of Small Nickel Clusters". International Journal of Quantum Chemistry. 70 (4–5): 694. doi:10.1002/(SICI)1097-461X(1998)70:4/5<693::AID-QUA15>3.0.CO;2-3. Retrieved 21 October 2016.
35. ^ "Finite temperature approaches -- smearing methods". VASP the GUIDE. Retrieved 21 October 2016.
36. ^ Tong, Lianheng. "Methfessel-Paxton Approximation to Step Function". Metal CONQUEST. Retrieved 21 October 2016.
Key papers[edit]
External links[edit] |
The Nonhuman Turn
Center for 21st Century Studies
Richard Grusin, Series Editor

The Nonhuman Turn
Richard Grusin, Editor
Center for 21st Century Studies

University of Minnesota Press
Minneapolis • London
An earlier version of chapter 6 was published as Wendy Hui Kyong Chun, "Crisis, Crisis, Crisis, or Sovereignty and Networks," Theory, Culture, and Society 28, no. 6 (2011): 91–112; reprinted by permission of SAGE Publications, Ltd., London, Los Angeles, New Delhi, Singapore, and Washington, D.C.; copyright 2011 Theory, Culture, and Society, SAGE Publications.

An earlier version of chapter 9 was published as Jane Bennett, "Systems and Things: A Response to Graham Harman and Timothy Morton," New Literary History 43, no. 2 (Spring 2012): 225–33; copyright 2012 New Literary History, University of Virginia; reprinted with permission of the Johns Hopkins University Press.

Copyright 2015 by the Board of Regents of the University of Wisconsin System. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Published by the University of Minnesota Press, 111 Third Avenue South, Suite 290, Minneapolis, MN 55401-2520, http://www.upress.umn.edu

Library of Congress Cataloging-in-Publication Data
The nonhuman turn / Richard Grusin, editor, Center for 21st Century Studies. (21st century studies)
Includes bibliographical references and index.
ISBN 978-0-8166-9466-2 (hc : alk. paper)
ISBN 978-0-8166-9467-9 (pb : alk. paper)
1. Panpsychism—Congresses. 2. Consciousness—Congresses. I. Grusin, Richard A., editor.
BD560.N66 2015
141—dc23
2014019922

Printed in the United States of America on acid-free paper. The University of Minnesota is an equal-opportunity educator and employer.
Contents

Introduction, Richard Grusin
1. The Supernormal Animal, Brian Massumi
2. Consequences of Panpsychism, Steven Shaviro
3. Artfulness, Erin Manning
4. The Aesthetics of Philosophical Carpentry, Ian Bogost
5. Our Predictive Condition; or, Prediction in the Wild, Mark B. N. Hansen
6. Crisis, Crisis, Crisis; or, The Temporality of Networks, Wendy Hui Kyong Chun
7. They Are Here, Timothy Morton
8. Form / Matter / Chora: Object-Oriented Ontology and Feminist New Materialism, Rebekah Sheldon
9. Systems and Things: On Vital Materialism and Object-Oriented Philosophy, Jane Bennett
Acknowledgments
Contributors
Index
Introduction
Richard Grusin
This book seeks to name, characterize, and therefore to consolidate a wide variety of recent and current critical, theoretical, and philosophical approaches to the humanities and social sciences. Each of these approaches, and the nonhuman turn more generally, is engaged in decentering the human in favor of a turn toward and concern for the nonhuman, understood variously in terms of animals, affectivity, bodies, organic and geophysical systems, materiality, or technologies. The conference from which this book emerged, hosted by the Center for 21st Century Studies (C21) at the University of Wisconsin-Milwaukee, was organized to explore how the nonhuman turn might provide a way forward for the arts, humanities, and social sciences in light of the difficult challenges of the twenty-first century. To address the nonhuman turn, a group of scholars was brought in to help lay out some of the research emphases and methodologies that are key to the emerging interdisciplinary field of twenty-first century studies. Given that almost every problem of note that we face in the twenty-first century entails engagement with nonhumans—from climate change, drought, and famine; to biotechnology, intellectual property, and privacy; to genocide, terrorism, and war—there seems no time like the present to turn our future attention, resources, and energy toward the nonhuman broadly understood. Even the new paradigm of the Anthropocene, which names the human as the dominant influence on climate since industrialism, participates in the nonhuman turn in its recognition that humans must now be understood as climatological or geological forces on the planet that operate just as nonhumans would, independent of human will, belief, or desires.
The ubiquity of nonhuman matters of concern in the twenty-first century should not obscure the fact that concern for the nonhuman has a long Western genealogy, with examples at least as far back as Lucretius's De Rerum Natura and its subsequent uptake in the early modern period.1 The concern with nonhumans is not new in Anglo-American thought. In American literature, for example, we can trace this concern back at least to Emerson, Thoreau, Melville, Dickinson, and Whitman. The nonhuman turn gained even more powerful impetus in the nineteenth century from Charles Darwin's insistence on seeing human and nonhuman species as operating according to the same laws of natural selection and William James's radical contention in The Principles of Psychology that human thought, emotion, habit, and will were all inseparable from, and often consequent upon, nonhuman, bodily material processes.2 The nonhuman turn in twenty-first century studies can be traced to a variety of different intellectual and theoretical developments from the last decades of the twentieth century:

• Actor-network theory, particularly Bruno Latour's career-long project to articulate technical mediation, nonhuman agency, and the politics of things
• Affect theory, both in its philosophical and psychological manifestations and as it has been mobilized by queer theory
• Animal studies, as developed in the work of Donna Haraway and others, projects for animal rights, and a more general critique of speciesism
• The assemblage theory of Gilles Deleuze, Manuel De Landa, Latour, and others
• New brain sciences like neuroscience, cognitive science, and artificial intelligence
• The new materialism in feminism, philosophy, and Marxism
• New media theory, especially as it has paid close attention to technical networks, material interfaces, and computational analysis
• Varieties of speculative realism including object-oriented philosophy, neovitalism, and panpsychism
• Systems theory, in its social, technical, and ecological manifestations

Such varied analytical and theoretical formations obviously diverge and disagree in many of their assumptions, objects, and methodologies. But they are all of a piece in taking up aspects of the nonhuman as critical to the future of twenty-first century studies in the arts, humanities, and social sciences. To put forth the concept of the nonhuman turn to name the diverse and baggy set of interrelated critical and theoretical methodologies that have coalesced at the beginning of the twenty-first century is to invite the expression of what can only be called "turn fatigue": the weariness (and wariness) of describing every new development in the humanities and social sciences as a turn. Organizing a conference called "The Nonhuman Turn" ran the risk of placing it as just the latest in a series of well-known academic turns that have already been named and discussed for decades. This complaint is not without its merits; indeed I return to it at the end of the introduction, where I reclaim the idea of a turn as itself nonhuman. Singling out the nonhuman turn among other recent turns, however, also runs the risk of inviting confusion with the posthuman turn, despite the very different stakes in these two relatively recent theoretical developments. Unlike the posthuman turn with which it is often confused, the nonhuman turn does not make a claim about teleology or progress in which we begin with the human and see a transformation from the human to the posthuman, after or beyond the human. Although the best work on the posthuman seeks to avoid such teleology, even these works oscillate between seeing the posthuman as a new stage in human development and seeing it as calling attention to the inseparability of human and nonhuman. Nonetheless, the very idea of the posthuman entails a historical development from human to something after the human, even as it invokes the imbrication of human and nonhuman in making up the posthuman turn.3 The nonhuman turn, on the other hand, insists (to paraphrase Latour) that "we have never been human" but that the human has always coevolved, coexisted, or collaborated with the nonhuman—and
that the human is characterized precisely by this indistinction from the nonhuman. Brian Massumi offers a “Simondian ethics of becoming” as a counter to the assertion that the human is entering “a next ‘posthuman’ phase,” arguing that Gilbert Simondon shows us the presence of “the nonhuman at the ‘dephased’ heart of every individuation, human and otherwise.”4 Simondon’s argument for the coevolution of humans and technology has been restated from a different perspective by philosopher and cognitive scientist Andy Clark, who characterizes humans as “natural-born cyborgs” in order to underscore how (at least since the invention of language) human cognition has been interdependent with embodied, nonhuman technologies.5 To name a conference “The Nonhuman Turn in 21st Century Studies,” then, was first and foremost to make a historical claim about a certain set of not always entirely compatible developments in academic discourse in the humanities and social sciences at the end of the twentieth and beginning of the twenty-first centuries.6 Intended as a macroscopic concept, the nonhuman turn is meant to account for the simultaneous or overlapping emergence of a number of different theoretical or critical “turns”—for example, the ontological, network, neurological, affective, digital, ecological, or evolutionary. As something of a theoretical or methodological assemblage, the nonhuman turn tries to make sense of what holds these various other “turns” together, even while allowing for their divergent theoretical and methodological commitments and contradictions. Each of these different elements of the nonhuman turn derives from theoretical movements that argue (in one way or another) against human exceptionalism, expressed most often in the form of conceptual or rhetorical dualisms that separate the human from the nonhuman—variously conceived as animals, plants, organisms, climatic systems, technologies, or ecosystems. Among the features that loosely link these turns is that they were all opposed, in one way or another, to the more linguistic or representational turns of the 1970s through 1990s—such as the textual, cultural, ideological, or aesthetic turns—in a rough adherence to what Nigel Thrift has characterized as “non-representational theory.” Similarly, these very different methodologies shared a resistance to the privileged status of the autonomous male subject of the Western
liberal tradition, especially in their refusal of such fundamental logical oppositions as human/nonhuman and subject/object. For practitioners of the nonhuman turn, what is problematic about these dualisms is their insistent privileging of the human. The critique of social constructivism for stripping the nonhuman world of agency or inherent meaning or qualities has been widespread. Perhaps most powerfully, the nonhuman turn challenges some of the key assumptions of social constructivism, particularly insofar as it insists that the agency, meaning, and value of nature all derive from cultural, social, or ideological inscription or construction. Massumi points to the methodological dominance of constructivism in the last decades of the twentieth century in explaining why Simondon’s insistence on the co-construction or ontogenesis of human and nonhuman has only begun to find an audience in the twenty-first century. Constructivism proved extremely valuable, Massumi says, for its emphasis on “becoming,” its insistence that things can’t be taken as givens, rather they come to be. . . . But the constructivism of the period played out in ways that radically diverge from the direction [Simondon] indicates. What was considered to come into being was less things than new social or cultural takes on them. What is constructed are fundamentally perspectives or paradigms, and the corresponding subject positions. Within the 1990s constructivist model these were understood in terms of signifying structures or coding, typically applying models derived from linguistics and rhetoric. This telescoped becoming onto the human plane. At the same time, it reduced the constitution of the human plane to the question of the human subject (if not its effective construction, then the impossibility of it, or if not exactly that, its subversion). A vicious circle results.7 Practitioners of the nonhuman turn find problematic the emphasis of constructivism on the social or cultural constructions of the human subject because, taken to its logical extreme, it strips the world of any ontological or agential status. The epistemological
focus that informed much of the work of constructivism actively discouraged any discussion of ontology, by refusing to grant agency to nonhuman nature, organisms, or technologies. Among the most recent and arguably most visible critiques of constructivism has been speculative realism, particularly its critique of Kantian correlationism, the belief that knowledge of the nonhuman world had to be correlated with or mediated by a priori human categories.8 Throughout this book speculative realism and object-oriented ontology appear both explicitly and implicitly in relation to the nonhuman turn. Indeed, when the call for papers for the conference first circulated among various social media, it was initially taken as another conference on speculative realism or object-oriented ontology, generating a good deal of social media buzz on Facebook and Twitter.9 But the nonhuman turn was at work long before anyone had ever heard of speculative realism or object-oriented ontology, which from a historical perspective are relative latecomers (albeit interesting ones) to this concern with the nonhuman. The Nonhuman Turn conference (along with this book) was organized not principally to make a claim about philosophical ontology but to sketch out an important methodological or conceptual development in late twentieth-century and early twenty-first-century academic scholarship. Although the current heightened pace of academic discourse often obscures the fact, long-standing institutional formations such as the humanities and social sciences make wide and quite slow turns, more like one of the large freighters on Lake Michigan visible through the windows of the Center for 21st Century Studies than like a Twitter stream or viral video. This slow pace of change can often be hard to see in the new "digital academy" of the twenty-first century, especially for graduate students and younger scholars, for whom even blogs can sometimes seem like a slow and archaic form of academic communication. Nonetheless, the editors of The Speculative Turn, which presented the first comprehensive collection of essays on speculative realism, contend that in addition to its philosophical innovations, one of the self-proclaimed medialogical innovations of speculative realism was its deployment of blogs and social media to develop and circulate
key texts and arguments informing the newly emergent discourse of speculative realism, object-oriented ontology, and its variants:

As any of the blogosphere's participants can attest, it can be a tremendously productive forum for debate and experimentation. The less formal nature of the medium facilitates immediate reactions to research, with authors presenting ideas in their initial stages of development, ideally providing a demystifying sort of transparency. The markedly egalitarian nature of blogs (open to non-PhDs in a way that faculty positions are not) opens a space for collaboration amongst a diverse group of readers, helping to shape ideas along unforeseen paths. The rapid rhythm of online existence also makes a stark contrast with the long waiting-periods typical of refereed journals and mainstream publishers. Instant reaction to current events, reading groups quickly mobilizing around newly published works, and cross-blog dialogues on specific issues, are common events in the online world.10

One does not have to be as enthusiastic as The Speculative Turn's editors about how blogs and social media accelerate the pace and intensity of academic discourse to recognize the increased role of Facebook, Twitter, and other social media in twenty-first-century academic mediality. The machinic temporality of contemporary scholarship, and the intensity created by all forms of socially networked media, exemplify how the nonhuman has altered the modes and rhythms and style of academic discourse in the twenty-first century. An important consequence of digital media technology and its nonhuman impact on academia and the humanities has been an intensification of time. It is necessary, therefore, to focus not only on the impact of our new technical media on the human, but on the nonhuman speeding up (and multiplication) of "academicky" discourse on e-mail, Facebook, Twitter, blogs, Tumblr, and other social media formats. Although a 2012 study noting that 51 percent of Internet traffic was already nonhuman garnered a good deal of media attention as marking a significant milestone in the history of the Internet, the nonhuman turn lets us recognize
that Internet traffic is always both human and nonhuman. Technical mediation itself needs to be understood as a nonhuman process within which or through which humans and nonhumans relate. Because socially networked media transactions multiply and quicken with or without human intervention—in marked contrast to the more reserved and deliberate interactions enabled by a technical medium like print—new technical formats have accompanied and fostered a speeding up of academic transactions through digital media, requiring twenty-first-century academics to spend significant research time online, hence generating new categories of academic work. As opposed to an earlier generation of graduate education, where research in the humanities and social sciences most often meant time working alone at home, in libraries, or cafés, much of the digital academic work of younger scholars is spent online in social media networks, archives, or databases. These online media interactions don't replace embodied academic interactions but intensify them. Invited talks, conferences, and symposia like the Nonhuman Turn, or research labs, centers, and institutes like C21—all are much more prevalent in today's globally networked academic world than they were thirty years ago. The accelerated nonhuman rhythm of socially networked communication and exchange can serve as well to intensify the affective tones of academic debate and disagreement in ways that can be exciting and generative, but that can often lead to misunderstandings or an inflated investment in or evaluation of the present and very near future, at the expense of the achievements of the past. The immediacy of our online academic environment both generates and is generated by the specialization or multiplication of audiences and online communities, which accelerates the pace of academic discourse within social networks of like-thinking individuals with shared interests. In part because of the numerous microcommunities that make up the nonhuman turn, neither the conference nor this book could possibly do justice to all of the diverse elements that make up the nonhuman turn in twenty-first century studies. The particular emphases of this book are undoubtedly influenced by my own "nonhuman turn," which began with the discovery of the Derridean trace or the maxim that il n'y a pas de hors-texte (there is nothing outside the text). In the Berkeley English department of the early
1980s, the recognition that literature or "the text" does not have a special ontological status, coupled with Foucault's genealogies of disciplinary apparatuses and techniques, helped to generate the "new historicism"—itself an attempt to turn critical attention toward discourses of the nonhuman, like medicine, economics, technology, or science. For the new historicism, however, language was still at the center, even if literary language had (in the best such work) been decentered or put on a level playing field with other nonliterary discourses. These methodological limitations led me to the nonhuman turn through three distinct approaches, which feature prominently in this book: science and technology studies (STS), particularly actor-network theory (ANT); new media theory; and affect theory. The Nonhuman Turn would have looked very different without the publication of two formative works of critical STS, Donna Haraway's "Manifesto for Cyborgs" in 1985, and the English publication of Bruno Latour's Science in Action in 1987.11 Haraway offered the ironic figure of the cyborg, a hybrid of human and nonhuman, as a potential replacement for the human-centered figures of "woman" and "labor" that underlay the political critiques that late twentieth-century socialist-feminism often directed at informatics and technoscience. Haraway's controversial conclusion that she would "rather be a cyborg than a goddess" was meant to counter a conception of woman that excluded the nonhuman from its purview. And for Latour, the distribution of agency across a heterogeneous network of human and nonhuman actants exploded the fundamental distinction between human science and nonhuman nature that underwrote much of the social science, history, and philosophy of science for much of the twentieth century. Science in Action was a crucial text in spurring actor-network theory to think intensively about the distribution of agency among human and nonhuman actors. Both of these works were instrumental in catalyzing the emerging interdisciplinary field of science and technology studies, particularly in the insistence on the need to account for the distributed or cyborg agency of humans and nonhumans in technoscientific practice. Together, STS and ANT provided an important counter to the almost religious rhetoric of cyberspace and virtual reality that
played a large role in critical cyberculture studies and early new media theory. In the enthusiasm of the emergence of cyberculture in the '80s and '90s, Latour's actor-network model provided a theory of translation with which to understand the transformative relations among textual and material mediations, human and nonhuman actors. Coupled with the insights of earlier nonhuman theorists like Walter Benjamin and Marshall McLuhan, actor-network theory helped in the development of theories like remediation, which took up the nonhuman implications of new media technologies, as well as media logics like immediacy, hypermediacy, and Lev Manovich's database aesthetic. Digital media function as material objects within the world, following Latour's distinction between intermediaries and mediators, in which mediators are not neutral means of transmission but are actively involved in transforming whatever they mediate. Digital media operate through what Latour characterizes as translation, not by neutrally reproducing meaning or information but by actively transforming human and nonhuman actants, as well as their conceptual and affective states. By the end of the twentieth century, science studies and new media theory each offered different but largely compatible ways to incorporate a concern for nonhumans into the humanities, to account for the agency of nonhuman actors, events, and mediation. In 1995, ten years after Haraway published her "Manifesto for Cyborgs," two key essays in the field of affect theory were published: Eve Sedgwick and Adam Frank's "Shame in the Cybernetic Fold: Reading Silvan Tomkins," and Brian Massumi's "The Autonomy of Affect."12 Sedgwick came to Tomkins and affect through her concerns with queer theory and paranoid and reparative reading, Massumi largely through his analysis and translation of Deleuze and Guattari. But despite their different accounts of and deployment of affectivity, Sedgwick and Massumi (through Tomkins and Whitehead, respectively) shared an intellectual ancestor in William James, whose monumental, two-volume Principles of Psychology remains a foundational work for affect psychology in the twenty-first century. Sedgwick and Massumi each argued that the representational or ideological thinking that had dominated cultural studies in the '80s and '90s needed to make way for modes of thought that paid attention aesthetically or politically to
questions of embodied and autonomous affect. For Tomkins, and Sedgwick's reading of him, affect is always object-oriented. There is no affectivity without an object, even if that object is another affect. For Massumi, affect was not reducible to linguistic, symbolic, or conceptual meanings but operated as an intensity moving through human and nonhuman bodies alike. For both Sedgwick and Massumi, affect provided a way to reassert the ontological agency of the natural or the nonhuman against what Sedgwick powerfully characterizes as "paranoid reading," in which any ascription of agency, inherent qualities, or causality to nature was seen as politically reactionary. Taken together, Sedgwick and Massumi helped to generate a critical-theoretical turn to affect—particularly but not exclusively in the emergence of a new materialism in feminist theory—that not only highlights the nonhumanness of embodied affectivity, but also provides a model to think about the affectivity of both animate and inanimate nonhumans.13 Although it is not always readily apparent how human affect can be nonhuman, I take affect theory to be particularly important for the nonhuman turn in two respects. First, it is crucial to affect theory, both from the James-Lange theory of emotion to Silvan Tomkins and Daniel Stern, and from Spinoza, Bergson, and Whitehead to Deleuze and Guattari, that human affect systems are somatic and bodily. Affect systems operate autonomously and automatically, independent of (and according to Massumi, prior to) cognition, emotion, will, desire, purpose, intention, or belief—all conventional attributes of the traditional liberal humanist subject. Second, it is also the case that affectivity belongs to nonhuman animals as well as to nonhuman plants or inanimate objects, technical or natural. Tomkins, for example, distinguishes the cat's site-specific affect system and its purr from the greater freedom of sites or avenues for humans to maximize affective happiness. But crucially both cat and human share affect systems, suggesting that affectivity is not limited to the human. And as Deleuze notes in Cinema 1, affectivity is a quality of things as well as people.

And why is expression not available to things? There are affects of things. The "edge," the "blade," or rather the "point" of Jack the Ripper's knife, is no less an affect than the fear which overcomes his features and the resignation which finally seizes hold of the whole of his face. The Stoics showed that things themselves were bearers of ideal events which did not exactly coincide with their properties, their actions and reactions: the edge of a knife.14

A key challenge to the idea of affect as nonhuman comes from fields like trauma theory, which sometimes appeals to affect theory to make sense of the traumatic emotional and affective experiences of those who had been subject to objectification and dehumanization. Considered more broadly, the nonhuman turn often invokes resistance or opposition from participants in liberatory scholarly projects—for example, feminist critiques of sexual violence, critical race studies, or holocaust studies—which work precisely against the objectification of the human, its transformation into a nonhuman object or thing that can be bought and sold, ordered to work and punished, incarcerated and even killed. For scholars working on such politically liberatory projects, who have labored so hard to rescue or protect the human from dehumanization or objectification, the nonhuman turn can seem regressive, reactionary, or worse, particularly if it is identified solely with the turn to objects as fundamental elements of ontology. Motivated partly by social constructivism, many practitioners of politically liberatory scholarship share the belief that any appeal to nature, for example, as possessing causality or agential force, could only operate in service of a defense of the status quo rather than fostering a more capacious sense of becoming and construction than social constructivism imagined. But this does not have to be the case. A concern with the nonhuman can and must be brought to bear on any projects for creating a more just society. If following Latour and others we take society as a complex assemblage of human and nonhuman actors—not as an autonomous entity or realm that can be appealed to in order to explain why things are as they are, or that can be somehow changed apart from changing the way things are—then the question of political or social change becomes a question of changing our relations not only to other humans but to nonhumans as well. To extend our academic and critical concern
to include nonhuman animals and the nonhuman environment, which had previously been excluded from or ignored by critical or scholarly humanistic concern, should be a politically liberatory project in very much the same way that earlier, similar turns toward a concern for gender, race, ethnicity, or class were politically liberatory for groups of humans. Having sketched out a very partial genealogy of the nonhuman turn, I conclude with a brief look at an even longer genealogy—the etymology and changing definitions of the word turn—as a way to return to the question of "turn fatigue." While it is true that critical "turns" have proliferated in the past few decades almost as a form of academic branding, the idea of a "nonhuman" turn provides a different perspective on what it means to name an intellectual movement a "turn." In fact, if we take a look at the definitions of the word turn as presented in the Oxford English Dictionary (OED), we can see that nonhuman materiality and movement have been part of the meaning of the word from its inception.15 Originating in fifteenth-century Middle English, with roots back through Anglo-Norman to Latin and Greek, turn is used in English as an action noun involved with nonhuman movement and change. The OED divides its various meanings under five main headings. The first meaning of turn, as "rotation," is tied to the physical technology of the wheel and entails nonhuman movement around an axis or central point, as in the turns of the hand of a clock or the phrase "as the world turns." "Change of direction or course," the second sense of turn, describes physical movement or change without the idea of rotating or revolving around a fixed center, as when a river turns around a bend or a rider turns his horse in a certain direction. The third meaning, "change in general," drops the sense of turn as physical movement and applies it to moments of transition, as in the turn of the season, the year, or the century. The fourth sense, which groups instances of turn as "actions of various kinds," includes the affectivity of turn in phrases like "bad turn" or "evil turn" or in sayings like "One good turn deserves another." The fifth sense of turn as "occasion" operates temporally—referring not to change or movement in space but to the movement of action through time. In this sense turn describes behavior that fosters (or counters) collectivity, especially as turn refers to the time an action comes around to an
individual, or when one fulfills one's obligation to serve—as when one takes one's turn or when one's turn comes around, or conversely when one acts or speaks out of turn. In this interesting sense of the word, it is agency or action, not wheels or rivers, that rotates among individuals or changes course or direction. Describing the nonhuman turn as a shift of attention, interest, or concern toward nonhumans keeps in mind the physicality and movement involved in the idea of a turn, how the nonhuman turn must be understood as an embodied turn toward the nonhuman world, including the nonhumanness that is in all of us. Rehearsing these various senses of the word turn lets me defend and reclaim its use to account for the change of direction or course in twenty-first century studies toward a concern with nonhumans. This nonhuman turn could be said to mark in one sense the rotation or revolutions of academic fashion. But in another sense this turn could also help to provoke a fundamental change of circumstances in the humanities in the twenty-first century. Insofar as a turn is an action, movement, or change, it also functions as a means of translation or mediation in the Latourian sense, indeed as a means of remediation or premediation. A turn is invariably oriented toward the future. Even a turn back is an attempt to turn the future around, to prevent a future that lies ahead. Throughout I have referred to the nonhuman turn as a concept. But in thinking of it as a movement of embodied thought, a rotation or shift of attention toward nonhumanness, I also underscore the way in which the nonhuman turn operates on what Deleuze and Guattari call "the plane of immanence." Most crucially, then, the embodied sense of a turn as a movement should be understood as what Deleuze and Guattari call a "turning toward": "truth can only be defined on the plane by a 'turning toward' or by 'that to which thought turns.' " In distinguishing the plane of immanence from the verticality of the concept, they emphasize repeatedly the sense of embodied movement in turning toward, and also emphasize that turning toward something is "the movement of thought toward truth" and the turning toward truth of thought. "To turn toward does not imply merely to turn away but to confront, to lose one's way, to move aside."16 To turn toward the nonhuman is not only to confront the nonhuman but to lose the traditional way of
the human, to move aside so that other nonhumans—animate and less animate—can make their way, turn toward movement themselves. I hope that this book, if not the nonhuman turn itself, might in some small way mark the occasion for a turn of fortune, an intensified concern for the nonhuman that might catalyze a change in our circumstances, a turn for the better not for the worse, in which everyone who wants to participate, human and nonhuman alike, will get their turn. Remediating the plenary addresses from the 2012 Nonhuman Turn in 21st Century Studies conference, the chapters collected in this book cover only small swaths of the scholarly terrain of the nonhuman turn. The conference's selection of plenary speakers was, as is true of any conference, in part a product of accident and contingency—including the availability of invited speakers, limitations of funding, and the critical climate in which the conference occurred. As I have already suggested, the concerns of this book are shaped in part by the intensity of interest in speculative realism and object-oriented ontology that was near its zenith at the time of the Nonhuman Turn conference. But as I have also suggested, the conference was not explicitly organized to address speculative realism but to explore more broadly the late twentieth- and early twenty-first-century turn toward the nonhuman. Although invited speakers were not asked explicitly to address the question of the nonhuman turn, all of their chapters exemplify that turn and some reflect on it explicitly. To frame the collection I introduce each chapter in terms of its relation to the nonhuman turn, trying in the process to draw some loose connections among them. The book opens with Brian Massumi's chapter because Massumi has been a practitioner and theorist of the nonhuman turn more extensively and for a longer period of time than any of the book's other contributors, and because his thought has been crucial for my own involvement with the nonhuman turn. "The Supernormal Animal" takes as its starting point a critique of the utilitarian account of instinct as a quality that has been deployed historically to distinguish humans from nonhuman animals. Building upon an insightful reading of Niko Tinbergen's analysis of the supposedly instinctive behavior of herring gulls, Massumi develops a concept of the "supernormal" to account not only for the unpredictability
of such supposedly automatic behaviors as feeding but also for the animality of desire and excess that generate such quintessentially human activities as art making. Near the end of the chapter, Massumi glosses the relation between animality and humanity: "Take it to heart: animal becoming is most human. It is in becoming animal that the human recurs to what is nonhuman at the heart of what moves it. This makes it surpassingly human. Creative-relationally more-than human." It is this more-than-humanness of the human that Massumi alludes to in the concept of "the supernormal animal." Steven Shaviro also takes up the continuity of humans and nonhumans in his chapter on "Consequences of Panpsychism." Shaviro's argument for panpsychism is directed squarely at some of the key debates in speculative realism and object-oriented ontology, unlike Massumi, who does not take up these debates explicitly (even while offering an ontogenetic account of objects that implicitly counters the insistence by Graham Harman and some of his followers that objects are ontologically originary).17 Running the risk of taking seriously the claims of panpsychism, which is often dismissed as something like a new age fantasy, Shaviro traces out its philosophical pedigree "from the pre-Socratics, on through Baruch Spinoza and Gottfried Wilhelm von Leibniz, and down to William James and Alfred North Whitehead." Shaviro dismisses the question of consciousness in favor of a more limited idea of sentience, developing an account of the continuity of human and nonhuman minds and values. After a discussion of Rudy Rucker's playful 2007 science fiction story "Panpsychism Proved," Shaviro unfolds an account of panpsychism built upon three key sources: Thomas Nagel's 1974 article "What Is It Like to Be a Bat?"; Wittgenstein's philosophical investigations of inner sensations and other minds; and Galen Strawson's 2006 defense of panpsychism. The payoff for Shaviro, or one of them, is to find fault with one of the key points of Harman's object-oriented ontology, the claim that all objects are in the final analysis "withdrawn" from access. Invoking Whitehead's more processual ontology, Shaviro insists "that the inner and outer, or private and public, aspects of an entity always go together," but that we have no more cognitive access to human thought or sensation than we do to that of nonhumans.
Erin Manning joins Shaviro and Massumi in refusing any categorical distinctions between human and nonhuman. Manning cites David Lapoujade's assertion, "At the heart of the human is nothing human" (also cited in Massumi's piece) as a way to take up the question of "artfulness," particularly the intuitive artfulness through which the relationality of the world is activated and made available to humans and nonhumans alike. At the heart of Manning's chapter is what might be called a kind of philosophical case study, the participatory exhibition Stitching Time, which she installed at the 2012 Sydney Biennale. Manning takes her experience in creating and managing the exhibit for the three weeks of the Biennale as an opportunity to think about a variety of interesting concerns—artfulness, time, sympathy, intuition, contemplation—many of which initially seem to belong to the human, not the nonhuman, subject. She addresses this concern in the penultimate section of the chapter, explaining that "Artfulness is always more than human":

Despite my focus on human participation here, the art of participation does not find its conduit solely in the human. Art also does its work without human intervention, activating fields of relation that are environmental or ecological in scales of intermixings that may include the human but don't depend on it. How to categorize as human or nonhuman the exuberance of an effect of light, the way the air moves through a space, or the way one artwork catches another in its movement of thought. This is surely the force of curation: its choreographic capacity to bring to life the lingering nonhuman tendencies that bridge fields activated by distinct artistic processes.

Like Manning, Ian Bogost takes up the relations among art and philosophy in "The Aesthetics of Philosophical Carpentry." And like Manning, he draws upon his own creative practice—in Bogost's case as a writer and game designer—to articulate an aesthetics of nonhuman philosophy. Like Manning, but in a very different style and to different ends, Bogost's prose both exemplifies and describes an aesthetic ethics of what he calls "carpentry,"
which focuses less on relationality or process and more on objects and things. Bogost's piece takes up the question of what it means to write philosophy, to deliver an academic paper, to make an argument. He draws upon Graham Harman and Alphonso Lingis in developing his idea of philosophical carpentry, which is meant to underscore the materiality of writing, or lecturing, or computing, or gaming, or participating in online social media. But carpentry is meant also to assert the ontological and agential status of objects or things in making their and our world. Most of all, however, Bogost makes philosophical carpentry (and the textual objects it constructs) seem like great fun. He eschews (or playfully remediates) academic argumentation in favor of an autobiographical, whimsical romp through the geographical and cultural landscape of Milwaukee and the digital landscape of such whimsical (and intentionally pointless) games as his Cow Clicker or La Molleindustria's The McDonald's Videogame. More than any piece in the collection, Bogost's chapter captures the flavor of an academic conference, of the Nonhuman Turn conference itself.18 In "Our Predictive Condition; or, Prediction in the Wild," Mark B. N. Hansen takes up the role of prediction in our contemporary historical moment. Hansen is concerned with the technical, computational, or medial nonhuman, specifically the way in which the administration of President Barack Obama has deployed a logic and politics of prediction that replaces the ideological "unknown unknown" of the George W. Bush administration's logic of preemption with the analysis of imminent threats. Hansen's chapter adds to and modifies the related concepts of preemption and premediation. Unlike what he sees as the event-driven focus of Massumi's account of preemption or my own account of premediation, Hansen understands "prediction in the wild" as operating independently of "human understanding and affectivity": "The discoveries of predictive analytics are discoveries of micrological propensities that are not directly correlated with human understanding and affectivity and that do not cohere into clearly identifiable events: such propensities simply have no direct aesthetic analog within human experience." Because the predictive logic of the Obama administration depends upon probabilistic analysis of possible future security threats made visible only through the
nonhuman analysis of data, Hansen sees the logic of prediction as based on reality rather than the allegory or fantasy of the Bush administration's doctrine of preemption. Hansen cites the current television series Person of Interest and the predictive news service Recorded Future as exemplary of the current "predictive condition" of the twenty-first century. Wendy Hui Kyong Chun sees our contemporary new media condition as one of crisis, not of prediction, providing an instructive counterpoint to Hansen's piece. In "Crisis, Crisis, Crisis; or, The Temporality of Networks," Chun deconstructs the contemporary mythos of computer code as logos, arguing that crisis both exceeds and is structurally necessary to networks. In part because both hardware and software operate according to a nonhuman, machinic temporality, network technologies must be repeatedly cared for in response to (and to prevent) crises that would disrupt connectivity. Stability and continuity on the network are the proof not of its permanence but of its fragility. Chun cites Friedrich Kittler to declaim the fact that human agency in computing has been surrendered to "data-driven programming," much as Hansen underscores the nonhuman technical analysis of data to predict the future. But for Chun what is most interesting about predictive data analyses like climate models is what she calls their "hopefully effective deferral of the future: these predictive models are produced so that, if they are persuasive and thus convince us to cut back on our carbon emissions, what they predict will not come about. Their predictions will not be true or verifiable." This "mode of deferring a future for another future" characterized by Chun would also be true of the Obama administration's data-mining analysis of security threats that Hansen describes—if these predictions prove persuasive in the present and the imminent threats are eliminated, then "what they predict will not come about." Chun sees this "gap between the future and future predictions" not as "reason for dismissal" but for "hope." In "They Are Here," Timothy Morton provides a rambunctious and sustained reading of the music video that Toni Basil made for the Talking Heads' song "Crosseyed and Painless." Morton takes on the potentially controversial task of linking the objectification and dehumanization experienced by African American youth in urban
Southern California with their involvement with nonhuman technologies, by way of object-oriented ontology's erasure of the human/nonhuman divide. Morton links the oppression of the urban black community not with its history of slavery but with such factors as environmental and economic racism. As such, he argues through an extended reading of the Crosseyed and Painless video that "the story of race, the story of environment, the story of things, are intertwined." Deploying a discussion of the broken tool in Heidegger, by way of the quantum operation of the cathode ray tube, Morton explains a key moment in the video as revealing the continuities between human and nonhuman sentience, of people and things. In "Form / Matter / Chora: Object-Oriented Ontology and Feminist New Materialism," Rebekah Sheldon takes up a question implicit in Morton's chapter: What does an ontologically informed critical reading of an aesthetic artifact look like? Choratic reading, Sheldon suggests, might be a way to make sense of texts as assemblages, to account for "the agency of human and nonhuman bodies, organic and nonorganic vitalities, discourses and the specific material apparatuses those discourses are." Beginning with the welcome observation that speculative realism has in some sense been anticipated by feminist materialism, Sheldon provides a cogent explanation of both the feminist and the speculative realist critiques of correlationism. She convincingly shows how the feminist critique of correlationism invokes the agency of matter while the object-oriented critique focuses on the formal features of objects, citing in support Harman's claim that object-oriented ontology is "'the first materialism in history to deny the existence of matter.'" Sheldon invokes Jane Bennett's account of vital materialism in Vibrant Matter as an example of how to acknowledge both the persistence of things and their relationality. She ends her chapter by calling for a practice-based model of "choratic reading," which draws on a highly theorized conceptualization of the chora as generating "an ontology of material-affective circulation" that preserves both the independence of objects from human correlation and their relationality. More directly than any of the chapters in this book, Jane Bennett takes up the significance of the nonhuman turn in the opening paragraphs of "Systems and Things: On Vital Materialism and
Object-Oriented Philosophy." She offers the theory of "vital materialism" both as a supplement to historical materialism and as a corrective to some of the key assumptions of object-oriented ontology, particularly its insistence on rejecting relationality. Bennett sees the nonhuman turn as a continuation of a long-standing philosophical project: "The nonhuman turn, then, can be understood as a continuation of earlier attempts to depict a world populated not by active subjects and passive objects but by lively and essentially interactive materials, by bodies human and nonhuman." Bennett places this sense of the vitality of objects against the claim made by Graham Harman that all objects withdraw from relations, as well as against Morton's concept of "hyperobjects." She offers her own theory of assemblages as a way to hold on to objects and relationality both, arguing for the importance of relationality for the nonhuman turn. To underscore the liberatory tendencies of the nonhuman turn referred to earlier in this introduction, Bennett ends her chapter (and the book) with a brief discussion of the historical materialism of literary texts as "special bodies," which "might help us live more sustainably, with less violence toward a variety of bodies," human and nonhuman alike. Although she is the last author to take her turn in this book, Bennett's chapter is by no means the final word on the nonhuman turn. Like the intense and animated conversations sparked by the conference from which this book is drawn, the chapters in this book should work to open the possibility for others to take their turn at making sense of the increasingly complex interrelations among humans and nonhumans both before and after the twenty-first century.
Notes

1. For a discussion of the revitalization of Lucretius in the early modern period, see Stephen Greenblatt, The Swerve: How the World Became Modern (New York: Norton, 2012).
2. William James, The Principles of Psychology (Cambridge, Mass.: Harvard University Press, 1890; rpt. 1983).
3. The most complex treatments of the posthuman, each of which exemplifies the two different senses of the term, include N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999); Cary Wolfe, What Is Posthumanism? (Minneapolis: University of Minnesota Press, 2009); and Rosi Braidotti, The Posthuman (Cambridge: Polity, 2013).
4. Brian Massumi, "'Technical Mentality' Revisited: Brian Massumi on Gilbert Simondon," interview with Arne De Boever, Alex Murray, and Jon Roffe, Parrhesia 7 (2009): 45.
5. Andy Clark, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (New York: Oxford University Press, 2003).
6. Perhaps because of this diversity and incompatibility, as late as the summer of 2011, when our conference on the nonhuman turn was first imagined, there were no substantive instances of this phrase to be found on Google, and only a handful of passing and qualified references to the phrase. This discovery was quite surprising, particularly because it seemed so clear to me that this "turn" had been well underway for some decades.
7. Massumi, "Technical Mentality," 37.
8. See Rebekah Sheldon's chapter in this book for an excellent description of the critique of correlationism by speculative realism and object-oriented ontology.
9. On November 29, 2011, the day that the initial call for papers (CFP) for C21's conference on the Nonhuman Turn in 21st Century Studies was circulated among various social media, a lively and at moments testy series of exchanges broke out in the Facebook comment streams of Alexander Galloway and McKenzie Wark, each of whom had graciously shared the CFP on their Facebook walls. The lively comments manifested two main concerns. First there was some general expression of "turn" fatigue, which I take up later in the introduction. But more intense, and initially more perplexing, were the majority of comments that assumed that a conference on the Nonhuman Turn in 21st Century Studies was ipso facto a conference on the burgeoning school of thought called "speculative realism" or "object-oriented ontology." What made these comments a matter of concern was that this debate was taking place on the Facebook walls of two prominent New York–based media and political theorists, underneath the CFP for C21's conference on the nonhuman turn, with the tacit implication that this was a conference devoted to, or advocating, recent developments in object-oriented ontology. One reason for this assumption was undoubtedly the (unintentional) resemblance between the name of the conference and the title of a recent collection of essays called The Speculative Turn. Another reason for this assumption, as I soon realized, was that the CFP for the Nonhuman Turn conference reactivated some prior debates from the third New York Object-Oriented Ontology symposium in September 2011, which featured presentations by several of the speakers at the C21 conference, including Jane Bennett, McKenzie Wark, Tim Morton, Steve Shaviro, Aaron Pedinotti, and Ian Bogost. Although social media are global in their scope, they do not completely do away with regionalism; in light of the New York locations of Galloway and Wark, the identification of our CFP with object-oriented ontology was not altogether unreasonable.
10. Levi Bryant, Nick Srnicek, and Graham Harman, "Towards a Speculative Philosophy," in The Speculative Turn: Continental Materialism and Realism, ed. Bryant, Srnicek, and Harman (Melbourne: re.press, 2011), 6–7.
11. Donna Haraway, "A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980's," Socialist Review 2 (1985): 65–107; and Bruno Latour, Science in Action: How to Follow Engineers and Scientists through Society (Cambridge, Mass.: Harvard University Press, 1987).
12. Eve Kosofsky Sedgwick and Adam Frank, "Shame in the Cybernetic Fold: Reading Silvan Tomkins," in Shame and Its Sisters: A Silvan Tomkins Reader, ed. Sedgwick and Frank (Durham, N.C.: Duke University Press, 1995), 1–35; and Brian Massumi, "The Autonomy of Affect," Cultural Critique 31 (1995): 83–109.
13. Jonathan Flatley provides a helpful overview of the various twentieth- and twenty-first-century variations of affect theory in Affective Mapping: Melancholia and the Politics of Modernism (Cambridge, Mass.: Harvard University Press, 2008), 11–27. For a useful survey of recent work in affect theory, see Melissa Gregg and Gregory J. Seigworth, The Affect Theory Reader (Durham, N.C.: Duke University Press, 2010).
14. Gilles Deleuze, Cinema 1: The Movement-Image, trans. Hugh Tomlinson and Barbara Habberjam (Minneapolis: University of Minnesota Press, 1986), 118.
15. Oxford English Dictionary, 2nd ed., s.v. "turn."
16. Gilles Deleuze and Félix Guattari, What Is Philosophy?, trans. Graham Burchell and Hugh Tomlinson (New York: Columbia University Press, 1994), 38.
17. For further discussion of the relations between Massumi and Harman, see Richard Grusin, "Reading Semblance and Event," Postmodern Culture 21, no. 3 (2011), http://muse.jhu.edu/.
18. What is most interesting or ironic about the close relationship between Bogost's chapter and his conference presentation is that he was the only one of the plenary speakers to deny us permission to video stream and archive his talk on the C21 YouTube site. In a sense, the style of the chapter remediates the conference presentation so effectively that it stands in for the absent video record.
The Supernormal Animal
Brian Massumi
Instinct is sympathy. If this sympathy could extend its object and also reflect upon itself, it would give us the key to vital operations.
—Henri Bergson, Creative Evolution
The athletic grace of the pounce of the lynx. The architectural feats of the savanna termite. The complex weave of the orb spider’s web. We admire these accomplishments as marvels of the natural world. The wonder resides as much in the automatic nature of these animal accomplishments as in their summum of technical perfection. Instinct: an innate condensation of ancestral wisdom passed from generation to generation, acquired through random mutation, retained through adaptive selection, unfolding with such regularity and efficiency as to rival the most skilled of human artisans (and in the case of certain social animals, apt to put the most well-oiled human bureaucracy to shame). Standard stimulus, normative output. Signal, triggering, performance, following one another in lockstep, with no second thoughts and without fail. Pure mechanism, all the more trustworthy for being unreflective. Instinct: the instrumentality of intelligence wrapped into reflex. So masterful it is in its functionality that it gives luster to utility. The productive beauty of the hive. The automatic aesthetics of adaptive stereotype. Such is instinct . . . by repute. But instinct has always been hard pressed to live up to its reputation. From the very first systematic investigations dedicated to it by the nascent science of ethology, it has betrayed a most disconcerting tendency. The same drive that so naturally leads it through to its normative accomplishments seems to push it, just as naturally, to overshoot its target. Instinct seems called upon, from within its very own movement, following its own
momentum, to outdo itself. Its instrumentality envelops an impulse to excess. This suggests a very different natural aesthetic—or a different nature of the aesthetic—than the beauty of utility. It is not for nothing that for thinkers such as Gilles Deleuze, Félix Guattari, Raymond Ruyer, and Étienne Souriau the theory of the animal is bound to the theory of art. The link between animality and artfulness necessitates a reevaluation of the neo-Darwinian notion that selective adaptation, consolidated by instinct, is the sole motor of evolution. Deleuze and Guattari replace adaptive evolution under pressure of selection with the concept of becoming as the pilot concept for the theory of the animal. Becoming is taken in the strongest sense, of emergence. “Can this becoming, this emergence, be called Art?”1 Called “art,” the formative movement of animal life is no longer analyzable exclusively in terms of adaptation and selection. Another name is called for: expression. In what way do the animal and the human, each in its own right, as well as one in relation to the other, participate in this expressive becoming? Together in what natural animal “sympathy”? If this natural sympathy is moved by an instinctive tendency to outdo itself in expressive excess over its own norms, what does this say about the nature of the animal? The nature of nature? And if this sympathy could extend its object and also reflect upon itself, providing the key to vital operations, what would prevent us from finding, at the self-extending core, something that could only be described as a primary consciousness? A germinal consciousness flush with life’s continuance? Might it be instinct that, against all expectations, obliges us to say, with Raymond Ruyer, that “consciousness and life are one”? That “morphogenesis is always consciousness”?2 A consciousness flush with the unfolding genesis of forms of life: a creatively lived dynamic abstraction. Abstraction? How is that animal? How animal is that? It is only possible here to stage a first move in the direction of these questions, through a replaying of instinct.3 It was Niko Tinbergen, one of the pioneers of ethology, who first noticed that all was not right in the automatism of instinct. Tinbergen was researching the instinctive behavior of the herring gull.4 A red spot on the female beak serves as signal or “trigger” for feeding behavior. It attracts the peck of the chick. The execution
of the peck intensifies the chick’s begging behavior and triggers the corresponding behavior in the adult, namely the regurgitation of the menu. Tinbergen was interested in knowing what precise perceptual quality constituted the trigger. He built decoy gull beaks presenting variable characteristics in an attempt to isolate which characteristics were essential to triggering the instinctive behaviors. His method was guided by four assumptions: (1) the signal as such is a discrete stimulus (the colored spot); (2) the stimulus stands out in its discreteness against a supporting background, the form of which is imprinted in the young gull as an innate schema (the geometry of the adult’s beak and head); (3) the hungry chick’s response follows mechanically from the appearance of the spot against the background of an actual shape formally resembling the schema; and (4) the response is a sequence of purely automatic actions operating as a reflex. The assumed mechanism was the triggering of an automatism by an instinctive recognition following a principle of formal resemblance. What Tinbergen found was quite different. Against expectations, the decoys most resembling actual seagull beaks and heads exerted the least force of attraction. The most “natural,” or naturalistic, forms left something to be desired. Tinbergen decided to push the experiments further by extending the range of variation of the presented forms “beyond the limits of the normal object.” This included decoys that “did not look like a good imitation of a herring gull’s bill at all.”5 Certain decoys exhibiting a noticeable deficit of resemblance were among the most effective of all. Tinbergen himself was forced to recognize something: that instinct displays an inherent tendency to snub good form and overshoot the limits of the normal in the direction of what he dubbed “supernormal stimuli.” The question then became, Precisely what perceptual qualities press beyond the normal? There was a strong correlation between the color red and the triggering of feeding behavior. The absence of red, however, did not necessarily block the instinctive activity. A spot of another color—even black or gray—could do the job, provided that there was high enough contrast between the spot and its background. High-contrast red proved the surest signal. But the fact that black
or gray could also do the job meant that the effect didn’t hinge on a discrete color quality, not even red. What the effectiveness of the presentation hinged on, Tinbergen concluded, was an intensification effect, in this case produced by the relation of contrast. The color red exerted a supernormal force of attraction to the extent that it lent itself to this intensification relation. Gull chicks may have a predilection for red, but even where red was present, there was supernormal pressure toward forcing red into an intensifying relation of immediate proximity with another color, the more contrastive the better. It was this proximity of differences in quality, this qualitative neighborhood, that dynamized the force of attraction, pushing the instinct to surpass what had been assumed to be its natural target. The term supernormal thus does not connote a simple opposition between what is normal and what is not, or between the natural and the artificial. What it connotes is a plasticity of natural limits, and a natural disrespect of good form. It indicates a tendency toward deformation stretching behavior out of shape from within its own instinctive operation—a transformational movement naturally pushing animal experience to artificially exceed its normal bounds. Supernormal stimuli express a natural tendency toward an affirmation of excessiveness. “Supernormal dynamism” is a better term than “supernormal stimulus” because it better reflects this tendential movement and the relational tenor of its triggering. The term trigger needs revisiting as well. If the signal functioned purely to trigger an automatic sequence of actions, it would be naturally resistant to the intervention of the supernormal dynamism. It would be firmly under the jurisdiction of the laws of resemblance governing good form and its well-behaved representation. The supernormal dynamism would come in contravention of the laws, relegating it to the status of a simple negative—an infraction—rather than recognizing it as an affirmation of what exceeds the bounds of behavioral norms. It also consigns it to the status of an externality, like an accident whose occurrence doesn’t rightfully belong to the nature of the situation, and simply intrudes. But the supernormal tendency clearly pushes from within, as a dimension of the situation. It does not accidentally come up against. It pushes across, with a distinct air of exaggeration. It doesn’t just throw
the behavioral functioning off its form. It makes the form of the functioning behaviorally vary. It twists the situation into a new relational variation, experientially intensifying it. What is in play is an immanent experiential excess by virtue of which the normal situation presents a pronounced tendency to surpass itself. That Tinbergen was unable to predict which characteristics were determining testifies to the fact that what is at stake is not resemblance to a specific schema serving as a model. The triggering “stimulus” was not in fact isolatable, and was not subject to the necessity of corresponding to a model. The most that can be said is that red as “stimulus” is bound to contrast. The same applies to other qualities ingredient to the situation. If they are likewise treated as linked or indissociably bound variables—in other words as relata—the bounds of potential variation are stretched. The plasticity of the situation is complicated with additional dimensions. The unpredictability grows with the complexity. For example, the geometric variables of length and thickness of beak, and beak size in comparison to head size, enter into relation with contrast, yielding a color-linked geometry in motion. The rhythm and pattern of the movement express a collective covariation, shaken into further variation by changes in aspect accompanying the quasi-chaotic movements of the hungry chick. The sum total of the qualities ingredient to the variation do not add up to a gestalt. There is no reliable background, any more than there is a fixed figure to stand out against it. When one quality changes, its proximity to others in the directness of their linkage entails a simultaneous variation affecting them all, in something like a relativist curvature of the space-time of behavior. Any variation reverberates across them all with a contagious force of deformation. “Such ‘relational’ or ‘configurational’ stimuli,” Tinbergen observes, “seem to be the rule rather than the exception.”6 There is no privileged element capable of extracting itself from its immediate neighborhood with the totality of linked qualities. The color red may well be a favorite of the gulls. But its preeminence can even be endangered by mousy grey. An element that is normally foregrounded is perpetually at risk of sinking back into the gregarity of the moving ground from which it distinguished itself. Any discrete quality may be swallowed back up at any time in the tide of
collective variation. Given this general condition of covariant linkage between qualities in immediate experiential neighborhood, it is no wonder that the ethologist was rarely able to predict the response to models including a supernormal element. Even after the fact, it is impossible to identify with certainty which linkage the relational intensification was due to. “So far,” Tinbergen writes, “no one has been quite able to analyze such matters; yet somehow, they are accomplished.”7 At most, it is possible to discern passages toward plastic limits, with periods of relative stasis along the way: vectors of supernormality punctuated by stases in the unfolding of instinct’s internal dynamic. In Deleuze and Guattari’s terms, it is more a question of “consistency”—that is to say, processual “self-consistency”—than it is a question of a gestalt or perceptual form in any normal sense. Philosopher of science Raymond Ruyer uses the word auto-conduction, which again has the advantage of connoting the dynamism.8 The upshot is that there is an inexpungible element of unpredictability in instinct that pertains not to the outside intervention of accidents, but to the self-consistency of its experiential dynamic. Self-consistency, as distinguished from the accidental, is not a synonym for predictable, regularized, or law abiding. Ruyer makes much of the fact that instinct may trigger in the absence of any stimulus because it demonstrates a capacity for spontaneity. The difference is that spontaneity is not slave to external circumstance. If it is as unpredictable as the accident, it is because it is auto-conducting to excess. Ruyer holds that the capacity for spontaneity, which he qualifies as “hallucinatory,” must be considered a necessary dimension of all instinct.9 Although the spontaneity of instinct cannot be reduced to the accidental, accidents nevertheless play a role. According to Ruyer:
We must consider that an animal in a complex, accident-rich environment would have little chance of survival if it could only avail itself of stereotyped movements, even if they were corrected by orienting stimuli. Of far greater importance are responses that are improvised directly upon the stimulus . . . acting as a kind of irritant rather than as a signal.10
The lesson of the accident: if instinct really lived up to its reputation as a reflex mechanism, it would be downright maladaptive. For this reason, Ruyer replaces the notion of the trigger-signal. The “stimulus” irritates, provokes, stirs. It is a processual “inducer.” It jump-starts an active process, inducing the performance of an “improvisation.” If we put the two terms together, we get a replacement for both “trigger” and “accident” in one terminological stroke: induced improvisation. The improvisation is an integral modification in the tendential self-consistency of animal experience, correlated to the externality of an accident-rich environment but governed by its own stirring logic of qualitative variation. An improvisation is a modification rising from within an activity’s stirring, bringing a qualitative difference to its manner of unfolding. It is immanent to the activity’s taking its own course. The animal’s observable behavior change in the environment is the external face of this immanent modification. In the immanence of its stirring, the modification is hallucinatory: it is “improvised directly” on the percept. It operates in all immediacy in the experiential domain of qualitative neighborhood. The physical environment and the qualitative neighborhood are in close processual embrace, but their dynamics remain distinct. Their difference in nature is never erased. The environment, or external milieu, does in the end impose selective constraints. Its selective principle is and remains that of adaptation. And yet, instinct opposes to the law of adaptation an auto-conducting power of improvisation that answers to external necessity with a supernormal twist. The improvised modification of the instinctive tendency, although externally induced, takes its own spontaneous form. As an improvisation, it is formally self-causing. Evolution is and remains subject to selective pressure. But that is not the question this episode from Tinbergen’s research raises for ethology and by extension for the philosophy of science. Conceptually, the question pertains to relation. Adaptation concerns external relations between an animal and its environment. Selective pressure exerts an external judgment on the fitness of a modification. By contrast, what an improvisation concerns directly, in the tendential neighborhood of its ownmost activity, are “internal” relations: covarying experiential qualities that come
of a block, indissociably linked.11 For Deleuze and Guattari, as for Ruyer and Bergson, there is another dynamic generative of variation besides the accident (on the accident of genetic mutation, more later). There is a positive principle of form-generating selection operating in its own neighborhood, autonomously of selective adaptation to external conditions. The peck of the herring gull expresses an inventive power of artifice immanent to the nature of instinct, no less than instinct is immanent to nature. To the adaptive imperative of conformity to the demands of selective pressure, instinct opposes an immanent power of supernormal invention. Faced with a change in the environment, like the sudden appearance of a red-spot-sporting beak on a head, it turns tail, folding back on itself to return to its own neighborhood, there to renew the ties of its native tendency. The accident-rich environment preys upon the instinctive animal. In answer, animal instinct plays upon the environment—in much the sense a musician plays improvisational variations on a theme. Bergson made the point: instinct, he said, is played more than it is represented.12 The ludic element of instinct pries open a margin of play in the interaction between individuals and between the organism and the environment. The blind necessity of mechanistic adaptation selecting schema of automatism is just half the story. Instinctual behavior is ringed, in Ruyer’s words, by a “fortuitous fringe” of induced improvisation.13 Deleuze and Guattari refer to a “creative involution” occurring on that fringe, a phrase itself playing on Bergson’s “creative evolution.” Involution: “to involve is to form a block that runs its line ‘between’ the terms in play and beneath assignable relations” (which is to say, external relations).14 Between individuals, and between the organism and the milieu, runs a tendential line in the direction of the supernormal. It plays directly in the unassignable register of internal relations: immediate qualitative linkages in a solidarity of variation, mutually deforming as a plastic block. The tracing of this “line” of plasticity is unpredictable, but is not strictly speaking accidental or aleatory, being oriented: toward a spontaneous excess of creative self-consistency. The tendency toward the supernormal is a vector. It is not only oriented, it carries a force. For example, a cuckoo chick possesses supernormal traits encouraging the female of another species
whose nest the cuckoo parasitizes to take it under its wing and nourish it. The host female, Tinbergen says, isn’t “willing” to feed the invader. No, she positively “loves” to do it.15 She does not just do it grudgingly. She does it positively with passion. The force of the supernormal is a positive force. It is not a force in the mechanistic sense. A mechanical force pushes up against a resistance to deliver an impulse determining a commensurate movement in reactive conformity with the quantity of applied force. What we have with the supernormal is a force that pulls forward from ahead, and does so qualitatively: an attractive force. Supernormality is an attractor that draws behavior in its direction, following its own tendency, not in conformity but deformedly and, surpassing normality, without common measure. Supernormality is a force not of impulsion or compulsion, but of affective propulsion. This is why it is so necessary to say that instinct involves the inducement of an effect rather than the triggering of an automatism. It is more stirringly about effecting from within than being caused from without. To do justice to the activity of instinct, it is necessary to respect an autonomy of improvised effect with respect to external causation.16 Instinct is spontaneously effective, in its affective propulsion. It answers external constraint with creative self-variation, pushing beyond the bounds of common measure. Deleuze and Guattari have a favorite word for affective force that pulls deformationally, creatively ahead, outside common measure: desire. Desire is the other, immanent, principle of selection. Deleuze and Guattari define desire as a force of liaison, a force of linkage conveying a transformational tendency. Desire has no particular object. It is a vector. Its object is before it, always to come. Desire vectorizes being toward the emergence of the new. Desire is one with the auto-conducting movement of becoming. Becoming bears on linked experiential qualities in a solidarity of mutual modification, or what Deleuze and Guattari call “blocks” of sensation.17 It plays upon unpredictable relational effects. It is the improvisation of these deformational, relational effects that constitutes the new. As Deleuze writes, “Form is no longer separable from a transformation or transfiguration that . . . establishes ‘a kind of linkage animated by a life of its own.’ ”18 Creative life of instinct: vital art. Ruyer remarks that it is of the
nature of instinctive activity to produce an “aesthetic yield.”19 After all, what is a force of mutual linkage if not a force of composition?20 Deleuze and Guattari ask, “Can this becoming, this emergence,” this composition animating the genesis of new forms with a life of their own and producing an aesthetic yield, be called “Art”? We’ve entered another immediate neighborhood, that of art, the animal, and becoming (evolution played upon by creative “involution”). In this immediate proximity, Deleuze and Guattari write, “What is animal . . . or human in us is indistinct.”21 For if we can call this Art, it is because the human has the same self-animating tendency to supernormality. Only when we experience it in our own desiring lives do we arrogantly tend to call it culture as opposed to nature, as if the animal body of human beings were somehow exempt from instinctive activity. As any biologist will tell you, the human body is on the animal continuum. Instinctively, as Deleuze and Guattari might say, we humans are in a “zone of indiscernibility” with the animal.22 Paradoxically, when we return most intently and intensely to that neighborhood “we gain singularly in distinction.”23 It is when the human assumes its immanent excess of animality that it becomes all the more itself. Brilliantly so. “The maximum determination issues from this block of neighborhood like a flash.”24
Addendum: Project Notes
A preliminary indication of the direction in which this replaying of the theory of instinct might move, along lines suggested by the opening quotation from Bergson, can be provided by glossing the concept of desire in relation to Gabriel Tarde’s notions of “belief” and “appetition.” Belief for Tarde has no content in the sense of fundamentally referring to an external object. It is a force of liaison binding a multiplicity of lived qualities into a primary perception that is self-effecting (Ruyer’s “hallucinatory” activity). It does not belong to a subject. It is a belonging conditioning the subject’s emergence. The subject does not have or possess belief. Rather, belief is a “possession.” It is the possession of a multiplicity of life-qualities by each
other. It is this immediate qualitative linkage that constitutes the real conditions of emergence of a subject of experience. In Bergsonian terms, this immediate possession of life qualities by each other can be considered a primary sympathy. Once again, sympathy is not something that is had by a subject. It is not a subjective content of animal life. It is a self-effecting qualitative movement constitutive of the life of the animal. The animal does not have sympathy; it is sympathy. “Appetition” for Tarde is the movement from one such sympathetic perception to another, following a tendency toward expansion. The tendency toward expansion is an “avidity,” corresponding in the present account to the “passion” of the supernormal tendency to excess. The passion of animal life is its creatively outdoing itself. It is the vital movement of animality’s self-improvising, energized by the sympathy that it is. Because this moves animality to surpass what it normally is, it would be more precise to say not that the animal “is” sympathy, but that its becoming is sympathetic. Together, belief and appetition constitute a pulse of “pure feeling.” This feeling is pure in the sense of not yet belonging to a subject, but rather entering actively into its constitution. This is the “block of sensation” for Deleuze and Guattari, or William James’s “pure experience.” The concepts of belief, desire, and pure feeling ground Tarde’s thinking in always-already ongoing minimal activity, rather than any a priori foundation, objective anchoring in external relation, or substance. The possession of belief and the movement of desire that orient sympathetic becoming are immanent to this minimal germinal activity. This is what I call elsewhere “bare activity.”25 The subject issues from this immanence: it is, in Whitehead’s terms, a “superject.” Tarde uses desire as a synonym for appetition. In this essay, desire refers to the co-operation of belief and appetition in Tarde. As employed here, the concept of desire also includes an essential reference to Ruyer’s definition of consciousness as an experiential solidarity between perceptual qualities indissociably bound in “primary liaison.” Primary liaisons are nonmechanistic and, as Deleuze and Guattari emphasize, “non-localizable.” There are not
therefore “connections” in any usual sense of the term. At the level of life’s minimal (germinal) activity, consciousness is one with the desiring movement of pure feeling.26 It is only when this primary consciousness comes to fold back onto itself—“reflect” on itself as Bergson had it in the opening quotation—that it secondarily “extends its object.” More precisely, life’s folding back on itself recursively constitutes an object. The object emerges as an effect of this recursivity. The object, Whitehead says, is that which returns.27 This should not be taken to imply that there is a preconstituted object that returns for a subject. It is not so much that the object returns, but rather that life recurs. It cycles back to an already improvised block of sensation, periodically reiterated with a negligible degree of variation—a variation within the bounds of what, for practical purposes, can be treated as the “same.” Thus the object, as it happens, is not one. It is a recursively emergent, reiterative event. What distinguishes an object-event from other species of event is that its reiterability enables it to stand as what William James calls a “terminus”: an attractor for life-composing movements of appetition.28 When the object is not actually present, it has not “withdrawn,” as object-oriented ontology would have it. It is attractively present as a virtual terminus. It is quietly exerting a force of potential. The objective movement of return of events to negligible variation settles in as a countertendency to the supernormal tendency. The already-improvised gels into a nodal point. In tendential orbit around points of return, life’s movement can take on regularity. The “sameness” of the object is the harbinger of this regularization. Through it, the object-event becomes a pivot for life reexpressing itself, with a higher quotient of repetition than variation. Thus it is less that “consciousness extends its object.” It is more that life extends its own activity into objective-event mode. A plane of regularization bifurcates from life’s self-expressive coursing. Life activity settles into a reiterative ordering of itself, a level of organization holding variation to an orbit of negligible variation. Regularization, normalization: it is on this object-oriented level that life’s activity comes to function. This object-oriented normalization of life constitutes what Deleuze and Guattari call the “plane of organization.” The plane of
organization of function is in reciprocal presupposition with the “plane of consistency” of desire.29 This gives life a double-ordering. Pure feeling is doubled by object-oriented perception, in a play of supernormal tendency and normalizing countertendency: of qualitative excess in vectored becoming, and objective leveling in recursive return; of passion-oriented “minimal” activity, and regularized object-oriented action; of creative spontaneity, and organized function. Always both, in a complexity of mutual imbrication. Evolution cannot be thought apart from this mutual imbrication of contrasting planes of life, and the dynamic tension of their always coming together, never dissolving into each other, never resolving their difference. Evolution is differential. Functional adaptation is only half the story. The other half is spontaneous and creative. On this side lies the primary origination of forms of life (which are nothing if not intensely qualitative, tending to excess in an immediacy of germinal activity). This double-ordering implies that what we normally call “consciousness” is a derivative of a primary, nonreflexive, non-object-oriented consciousness corresponding to a radically different mode of life activity, as yet in no subject’s possession. This primary consciousness is not of something. It is something: sympathy. It “is” the immediacy of sympathetic becoming—also called intuition.30 From primary consciousness’s recurrent becoming comes, paradoxically, its countertendency. The mode of this derivative, or secondary consciousness, is recognition. Recognition takes recursive return for identity. It constrains recurrence to the same. Organization and normalization are predicated on object-oriented countertendency running circles around desire.
Project:
• Index the animal to its unrecognizability.
• Find in the human the passion of the animal.
• Induce human being to recur to its animal becoming.
• Take it to heart that “at the heart of the human there is nothing human.”31 For it is in the eminently objectless, immediately relational, spontaneously variation-creating activity flush with instinctive animality—this tendency the human shares with the gull—that the human “gains singularly in distinction,” attaining its own “maximum determination” in a passionate flash of supernormal becoming.
• Take it to heart: animal becoming is most human. It is in becoming animal that the human recurs to what is nonhuman at the heart of what moves it. This makes it surpassingly human. Creative-relationally more-than-human.32
• Put that in writing.
• Remembering that supernormal becoming is a “minimal activity” of life’s exceeding itself. It is modest, to the point of imperceptibility. In the nest or in writing, it is a modest gesture, vanishing even, no more than a flash. Yet vital. Potentially of vital importance. Because it may resonate and amplify and shake life’s regularities to their object-oriented foundations.
• Consider that in the primary consciousness of the immediate intuition of animal sympathy “the act of knowledge coincides with the act that generates the real.”33 Not: correlates to. Coincides with. Thought-matter.34 The reality of animality as abstraction. Abstraction as the movement of the real.
• Improvise on that.
Notes
1. Henri Bergson, Creative Evolution, trans. Arthur Mitchell (New York: Henry Holt and Company, 1911), 238.
2. Raymond Ruyer, La genèse des formes vivantes (Paris: Flammarion, 1958), 260, 238. Translations mine.
3. For a more extended consideration of these questions, see Brian Massumi, What Animals Teach Us about Politics (Durham, N.C.: Duke University Press, 2014).
4. Niko Tinbergen and A. C. Perdeck, “On the Stimulus Situation Releasing the Begging Response in the Newly Hatched Herring Gull Chick,” Behaviour 3, no. 1 (1950): 1–39. The same effect had been observed by O. Köhler in 1937.
5. Niko Tinbergen, Animal Behavior (New York: Time-Life Books, 1965), 67.
6. Ibid., 68.
7. Ibid. “There is no absolute distinction between effective sign-stimuli and the non-effective properties of the object. . . . The full significance of supernormal stimuli is not yet clear.” Niko Tinbergen, The Study of Instinct (Oxford: Oxford University Press, 1951), 42, 46. Tinbergen, however, does not integrate these observations into his theory of instinct overall. They remain isolated musings. Deleuze and Guattari critique the predominance of the stimulus-trigger-automatism model in Tinbergen’s thinking; see Gilles Deleuze and Félix Guattari, A Thousand Plateaus, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), 327–28.
8. “This is a question of consistency: the ‘holding-together’ of heterogeneous elements. At first, they constitute no more than a fuzzy set.” Deleuze and Guattari, A Thousand Plateaus, 323. They characterize life itself in terms of a “gain in consistency,” for which they use “self-consistency” (auto-consistance) as a synonym (335). They go on to define consistency as a “surplus-value of destratification” (emphasis in original, 336). For Ruyer on auto-conduction, see Genèse des formes vivantes, 65.
9. Ruyer, Genèse des formes vivantes, 146–47.
10. Ibid., 149. Ruyer says the same of internal “signals” such as hormones, which according to his account induce a relational effect of covariation that is in every respect analogous to what occurs in the case of fields of external perception.
11. On expressive qualities and internal relations, see Deleuze and Guattari, A Thousand Plateaus, 317–18, 329.
12. Bergson, Creative Evolution, 145, 180. Joué is translated as “acted.”
13. Ruyer, Genèse des formes vivantes, 142.
14. Deleuze and Guattari, A Thousand Plateaus, 239.
15. Tinbergen, Animal Behavior, 67.
16. On cause and effect as belonging to different orders, see Gilles Deleuze, Logic of Sense, ed. Constantin V. Boundas, trans. Mark Lester with Charles Stivale (New York: Columbia University Press, 1990), 6–7. Deleuze ties the independence of effects to language. The present account does not follow him in this regard.
17. On blocks of becoming, see Gilles Deleuze and Félix Guattari, ch. 10, “Becoming-Intense, Becoming-Animal, Becoming-Imperceptible . . . ,” in A Thousand Plateaus, 232–309; and Gilles Deleuze and Félix Guattari, ch. 8, “Blocks, Series, Intensities,” in Kafka: For a Minor Literature, trans. Dana Polan (Minneapolis: University of Minnesota Press, 1986), 53–62.
On the associated concept of blocks of sensation, see Gilles Deleuze and Félix Guattari, ch. 7, “Percept, Affect, Concept,” in What Is Philosophy?, trans. Graham Burchell and Hugh Tomlinson (New York: Columbia University Press, 1994), 163–99.
18. Gilles Deleuze, Francis Bacon: The Logic of Sensation, trans. Daniel W. Smith (Minneapolis: University of Minnesota Press, 2003), 104. In the English translation, the French phrase une liaison animée par une vie propre is rendered as “a love affair kindled by a decent life.” I take the phrase much more literally. The word liaison is used throughout Deleuze and Deleuze/Guattari’s work in reference to and in resonance with Ruyer’s thought to mean an unassignable (nonlocal) “linkage,” and the idea of an immanent life of form (“animated by a life of its own”) fits the context of this passage, which is working from the thought of Wilhelm Worringer.
19. Ruyer, Genèse des formes vivantes, 142.
20. Deleuze, Francis Bacon, 104.
21. Deleuze and Guattari, What Is Philosophy?, 174.
22. Deleuze and Guattari, A Thousand Plateaus, 273, 279, 293–94, 305.
23. Deleuze and Guattari, What Is Philosophy?, 174 (translation modified).
24. Ibid., 174 (translation modified).
25. Brian Massumi, Semblance and Event: Activist Philosophy and the Occurrent Arts (Cambridge, Mass.: MIT Press, 2011), 1–3, 5, 10–11, 22, 23, 27; and Brian Massumi, “Perception Attack: Brief on War Time,” Theory & Event 13, no. 3 (2010), http://muse.jhu.edu/journals/theory_and_event/v013/13.3.massumi.html.
26. For an analysis of belief, appetition, avidity, and pure feeling in Tarde, see Didier Debaise, “La métaphysique des possessions: puissances et société chez G. Tarde,” Revue de métaphysique et de morale 4, no. 60 (2008): 447–60. On consciousness as transspatial liaison, see Raymond Ruyer, Le néo-finalisme (Paris: PUF, 1952), 113. On non-localizable liaison and becoming, see Deleuze and Guattari, A Thousand Plateaus, 413.
27. “Objects are the elements in nature that can ‘be again.’ ” Alfred North Whitehead, Concept of Nature (Cambridge: Cambridge University Press, 1964), 143.
28. William James, Essays in Radical Empiricism (Lincoln: University of Nebraska Press, 1996), 39–91.
29. On the plane of organization and the plane of consistency, see Deleuze and Guattari, A Thousand Plateaus, 9, 21, 70–73, 251–52, 265–72.
30. For an excellent analysis of intuition and sympathy in Bergson (which in Bergson’s texts are not actually as synonymous as they are presented to be here), see David Lapoujade, Puissances du temps. Versions de Bergson (Paris: Editions Minuit, 2010).
31. Lapoujade, Puissances du temps, 62.
32. Erin Manning develops the concept of the more-than-human as a relational “ecology of practices” in Always More Than One: Individuation’s Dance (Durham, N.C.: Duke University Press, 2013).
33. Henri Bergson, Mélanges (Paris: PUF, 1972), quoted in Lapoujade, Puissances du temps, 35.
34. On the plane of consistency as matter, see Deleuze and Guattari, A Thousand Plateaus, 43.
Consequences of Panpsychism
Steven Shaviro
What is it like to be a rock? Rudy Rucker’s science fiction short story “Panpsychism Proved” (2007) provides one possible answer. An engineer at Apple named Shirley invents a new “mindlink” technology, which allows people to “directly experience each other’s thoughts.” When two individuals swallow “microgram quantities of entangled pairs of carbon atoms,” they enter into direct telepathic contact. Shirley hopes to seduce her coworker Rick by melding their minds together. Unfortunately, he has other plans. She ingests a batch of entangled carbon particles; but Rick dumps his corresponding batch on a boulder. Instead of getting in touch with Rick, Shirley finds that “the mind she’d linked to was inhuman: dense, taciturn, crystalline, serene, beautiful. . . .” She fails in her quest for sex and deeper human contact. But she finds solace through intimacy with a “friendly gray lump of granite. How nice to know that a rock had a mind.”1 Panpsychism is the thesis that even rocks have minds. More formally, David Skrbina defines panpsychism as “the view that all things have mind or a mind-like quality. . . . Mind is seen as fundamental to the nature of existence and being.”2 Or in the slightly different words of Thomas Nagel, who entertains the notion without fully endorsing it, panpsychism is “the view that the basic physical constituents of the universe have mental properties, whether or not they are parts of living organisms.”3 Most broadly, panpsychism makes the claim that mind, or sentience, is in some manner, as Rucker claims, “a universally distributed quality.”4 In opposition to idealism, Cartesian dualism, and eliminativist physicalism alike, panpsychism maintains that thought is neither merely
epiphenomenal nor something that exists in a separate realm from the material world. Rather, mind is a fundamental property of matter itself. This means that thinking happens everywhere; it extends all the way down (and also all the way up). There are differences of degree in the ways that entities think, but no fundamental differences of kind. Because it makes such seemingly extravagant claims, panpsychism is easily subject to derision and ridicule. The most common response to it is probably the one epitomized by the philosopher Colin McGinn, who calls it “a complete myth, a comforting piece of utter balderdash. . . . Isn’t there something vaguely hippyish, i.e., stoned, about the doctrine?”5 Even Galen Strawson, the best-known contemporary analytic philosopher to embrace panpsychism, admits that the doctrine “sounded crazy to me for a long time”; he finally got “used to it,” he says, only when he became convinced that there was “no alternative.”6 However stoned or crazy it might sound, panpsychism in fact has a long philosophical pedigree, as Skrbina amply demonstrates in Panpsychism in the West. From the pre-Socratics, on through Baruch Spinoza and Gottfried Wilhelm von Leibniz, and down to William James and Alfred North Whitehead, panpsychism is a recurring underground motif in the history of Western thought. It was under eclipse in the second half of the twentieth century, but in recent years it seems to have returned with a vengeance. No less than three anthologies of essays on panpsychism have been published in the past decade, with contributions by analytic and continental philosophers alike.7 Panpsychism seems especially relevant today, in the light of the “nonhuman turn” in critical discourse, and the new philosophical movements that are gathered under the rubric of “speculative realism.” In any case, panpsychism has never been a mainstream philosophical doctrine. But it has persisted as a kind of countertendency to the anthropocentrism, and the hierarchical ontologies, of dominant philosophical dogmas. Panpsychism offers a rebuke both to extravagant idealism on the one hand, and to reductionism and eliminativism on the other. The problem with panpsychism, for most people, is evidently one of extension. What can it mean to attribute mentality to all entities in the universe, without exception? Modern Western philosophy,
from the Cartesian cogito through the Kantian transcendental subject and beyond, is grounded upon an idealization of the human mind—or more narrowly, upon the rationality that is supposed to be one of the powers of the human mind. And much of this tradition has sought to overcome the apparent problem of solipsism, or skepticism regarding “other minds,” by appealing to a sensus communis, or a linguistic ability, that all human beings share. In this way, our minds are the guarantors of our commonality. But how far can the ascription of mentality be extended beyond the human? To begin with, can I rightly say that my cat thinks and feels? Many philosophers have in fact said no. Descartes notoriously argued that animals were nonthinking automata. Heidegger maintained that animals (in contrast to human beings) were intrinsically “poor in world.” Recent thinkers as diverse as Richard Rorty, Jacques Rancière, and Slavoj Žižek continue to endorse human exceptionalism, because they all insist upon the centrality of linguistic forms (conversation for Rorty, linguistic competence for Rancière, or the Symbolic order for Žižek) as the basis for a sort of Kantian universal communicability. Even today, it is still often argued that nonhuman animals do not really think, because they are incapable of language, or because they do not have an awareness of mortality, or because they supposedly lack the capacity to make rational inferences. Robert Brandom, for instance, distinguishes mere sentience, or “mammalian sensuousness,” such as my cat might feel, from the sapience that supposedly human beings alone possess; for Brandom, only the latter is “morally significant.”8 Following Brandom, Pete Wolfendale argues that “nothing has value for animals, because there’s no sense in which their behaviour could be justified or unjustified. This is the essence of the difference between us and them: animals merely behave, whereas we act.”9 In spite of such arguments, both philosophical claims and common opinion have shifted in recent years more fully in favor of recognizing the mentality of at least the higher animals (mammals and birds, and possibly cephalopods). I presume that most people today would agree that dogs and cats have minds. That is to say, these animals think and feel; they have inner qualitative experiences, they register pleasure and pain, and they make decisions. But does a lobster similarly think and feel? Does a jellyfish? Does a
tree? Does the slime mold Physarum polycephalum? In fact, there is good scientific evidence that all living organisms, including such brainless ones as plants, slime molds, and bacteria, exhibit at least a certain degree of sentience, cognition, decision making, and will.10 But what about things that are not alive? How many non-stoned people will agree with Rudy Rucker that a rock has a mind? Or for that matter, a neutrino? According to Whitehead, Leibniz “explained what it must be like to be an atom. Lucretius tells us what an atom looks like to others, and Leibniz tells us how an atom is feeling about itself.”11 But who today is Leibnizian and Whiteheadian enough to assert that an atom, or a neutrino, feels anything whatsoever about itself? Few advocates of panpsychism would expect that the doctrine could literally be verified by scientific experiment, as happens in Rucker’s whimsical story. For panpsychism makes an ontological claim, rather than a necessarily empirical one. Even if we were able, as Whitehead once put it, to “ask a stone to record its autobiography,” the results would probably not be very edifying or exciting.12 It is not a question, therefore, of actually getting a rock or a neutrino to speak; but rather one of recognizing that mentality, or inner experience, is not contingent upon the ability to speak in the first place. Indeed, direct telepathic contact—like that portrayed in Rucker’s story—is not likely to be possible, even between speaking human subjects. This is because any such contact would end up being public and external, precisely in the way that speech already is. Inner experience—sensations, qualia, and the like—would remain untouched. Panpsychism is not predicated upon the possibility of what Graham Harman calls “human access” to other entities and other minds, whether they be human or nonhuman.13 To the contrary, panpsychism’s insistence upon the mentality of other entities in the world also implies the autonomy of all those entities from our apprehension—and perhaps even from our concern. When panpsychism insists upon the mentality of lobsters, neutrinos, and lumps of granite, what it is saying in the first instance is that these entities exist pour soi as well as en soi. They are autonomous centers of value. By this, I mean that it is not just a matter of how we value lobsters, or neutrinos, or lumps of granite, but also of the ways in which these entities value themselves—and
differentially value whatever other entities they may happen to encounter. For entities do indeed value themselves. In the first instance, they do so by the very act of persisting through time, and establishing themselves as what Whitehead calls “enduring objects” (PR, 35, 109). This active persistence is more or less what Spinoza calls conatus, or what Levi Bryant calls the “ongoing autopoiesis” of objects.14 I am not entirely happy with these terms, however. Conatus and autopoiesis seem to me to put too exclusive an emphasis upon the entity’s self-reproduction and maintenance of its identity, or upon what Bryant calls its “endo-consistency.”15 But the value activity of an entity that persists through time is not just a matter of self-perpetuation, or of the continually renewed achievement of homeostatic equilibrium. It may well also involve growth or shrinkage, and assimilation or expulsion, or an active self-transformation and becoming-other. All these can be characterized as what Whitehead calls “conceptual initiative,” or “the origination of conceptual novelty” (PR, 102). Such processes are more akin to what Gilbert Simondon calls “individuation,” and to what the Whiteheadian poet Charles Olson calls “the will to change,” than they are to conatus or autopoiesis.16 In any case, the active self-valuation of all entities is in fact the best warrant for their sentience. For value activity is a matter of feeling, and responding. Whitehead defines value, or worth, as an entity’s “sense of existence for its own sake, of existence which is its own justification, of existence with its own character.” Each cat or dog has “its own character,” and so does each lobster, and even each neutrino. For Whitehead, “the common fact of value experience” constitutes “the essential nature of each pulsation of actuality. Everything has some value for itself, for others, and for the whole. . . . Existence, in its own nature, is the upholding of value intensity. . . . [Every entity] upholds value intensity for itself. . . .”17 In other words, each entity has its own particular needs and desires, which issue forth in its own affirmations of value. These are bound up in the very being of the entities themselves. Rather than saying (with David Hume) that values cannot be derived from facts, or (with the early Ludwig Wittgenstein) that value “must lie outside the world,” we should rather say that multiple values and acts of valuation are themselves irrefutable facts within the world.18 These
values and valuations all belong to “a common world,” as Whitehead says—indeed, they are immanent to the very world we live in.19 But each of these values and valuations also exists in its own right, entirely apart from us; and the values of other entities would still continue to exist in the world without us. The problem, then, is not to derive an “ought” from an “is,” but to see how innumerable “oughts” already are. Contra Wolfendale, nonhuman animals do continually ascribe value to things, and make decisions about them—even if they do not offer the sorts of cognitive justifications for their value-laden actions that human beings occasionally do. And contra Brandom, this is indeed a morally significant fact; for as Whitehead puts it, “We have no right to deface the value experience which is the very essence of the universe” (MT, 111). The standard retort to the Whiteheadian value argument that I have just been making is, of course, to accuse it of anthropomorphism. When Whitehead claims that nonhuman entities have values and experiences, that they have particular points of view, and that they think and make decisions, is he not imputing human categories to them? I argue, however, that making such a charge is begging the question. For the accusation of anthropomorphism rests on the prior assumption that thought, value, and experience are essentially, or exclusively, human to begin with. And I can see no justification for this. Our own value activities arose out of, and still remain in continuity with, nonhuman ones—as we have known at least since Darwin. We perpetuate anthropocentrism in an inverted form when we take it for granted that a world without us, a world from which our own values have been subtracted, is therefore a world devoid of values altogether. After all, even Cthulhu has its own values—however much we may dislike them and (rightly) feel threatened by them. The same goes for the anopheles mosquito, and for the (recently exterminated) smallpox virus. I think that this persistence of nonhuman values is a serious problem for the “eliminativist” versions of speculative realism, such as those of Quentin Meillassoux and Ray Brassier.20 There is no reason why overcoming what Meillassoux calls “the correlation between thinking and being” should require the extirpation of thought (or knowledge, or experience) altogether.21 For a more nuanced approach to the question of nonhuman
Consequences of Panpsychism
minds and nonhuman values, I turn to Thomas Nagel’s famous 1974 article “What Is It Like to Be a Bat?” Nagel argues that “the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.” He further explains that “what it is like,” as he uses the term, “does not mean ‘what (in our experience) it resembles,’ but rather ‘how it is for the subject himself.’ ”22 A bat’s sonocentric experience—or for that matter, a dog’s olfactocentric one—is so different from the oculocentric experience of human beings that we will never be able to literally feel, or entirely understand, “what it is like” to be a bat or a dog. The best we can do is to create metaphors and similes—or as I would rather say, aesthetic semblances—that allude in some way to chiropteran or canine existence. Graham Harman rightly remarks that “allusion and allure are legitimate forms of knowledge,” but also that they are necessarily partial and incomplete.23 Likeness-in-human-terms, if it is projected imaginatively enough, may work to dislocate us from the correlationist position of understanding these other entities only in terms of their resemblance, and relationship, to ourselves. But it can never actually attain the inner being of those other entities. Nagel therefore argues for a much stronger sense of “likeness.” For him it is not just a matter of our trying to explain what being a bat might be like in human terms. It is also, and more important, a question of what being a bat is like for the bat itself. Such is the project of what Ian Bogost, following up on Nagel, calls “alien phenomenology.”24 As Nagel puts it, “The experiences of other creatures are certainly independent of the reach of an analogy with the human case. They have their own reality and their own subjectivity.”25 In affirming this, Nagel moves from the problem of access to the problem of being: from epistemology (the question of how we can know what a bat is thinking) to ontology (grasping that the bat is indeed thinking, and that this thinking is an essential aspect of the bat’s own being, even though we cannot hope to comprehend it). It is evidently “like something” to be a bat; but we will never be able to imagine, or to state in words, just what that “something” is. The point is a double one. The bat’s thinking is inaccessible to us; we should not anthropomorphize the bat’s experience by modeling it on our own. But we also should not claim that, just because it is
Steven Shaviro
nonhuman, or not like us, the bat cannot have experiences at all. These are really just two sides of the same coin. We need to accept both that the bat does have experiences and that these experiences are radically different from ours, and may have their own richness and complexity in ways that we will never be able to understand. The bat’s inner experience is inaccessible to me; but this is so in much the same way (albeit to a far greater extent) that any other person’s inner experience is inaccessible to me. It is even similar to the way that my own inner experience is in fact also inaccessible to me. This is because of the strange ontological status of “experience.” I think that the later Wittgenstein is surprisingly relevant here—in spite of the fact that he is usually taken to be rejecting the very notion of mental states and inner (private) experiences. Wittgenstein does indeed say that the representations we make of our inner sensations are “not informative,” and that it is incoherent to speak of such sensations in the same ways that we speak of physical things.26 A toothache is not an object of perception in the way that a tooth is: you can see or touch my tooth, but you cannot see or touch my toothache. Indeed, the way that I feel a toothache in my tooth is vastly different from the way that I apprehend the tooth itself by touching it with my tongue or finger, or looking at it in the mirror, or even knowing its place through proprioception. This line of argument has often been used, in post-Wittgensteinian analytic philosophy, in order to deny the existence, or the meaningfulness, of “qualia” or inner sensations altogether. But Wittgenstein himself does not do this. Rather, he explicitly warns us against denying or discounting the reality of inner experience on such a basis: “Just try—in a real case—to doubt someone else’s fear or pain!” (PI, #303). After all, he asks, “What greater difference could there be” than that between “pain-behaviour with pain and pain-behaviour without pain”? (PI, #304). Inner sensation, Wittgenstein concludes, is “not a Something, but not a Nothing either!” (PI, #304). What he means by this is that first-person experience cannot possibly be a matter of third-person, objective knowledge. First-person experience is not a Something, because—in contrast to the behavior that expresses it—it cannot be pointed to, or isolated by an observer, or made subject to scientific experimentation. But since this inner sensation, or first-person experience, is
“not a Nothing either,” it also cannot be eliminated, or dismissed as meaningless. This is why it is wrong to regard Wittgenstein as a behaviorist or an anti-internalist—although he has most commonly been interpreted this way. Thus Daniel Dennett conceives himself to be completing the Wittgensteinian revolution in philosophy, by striving to “extirpate” the very notion of “qualia that hide forever from objective science in the subjective inner sancta of our minds.” Dennett takes the final reductionist step that Wittgenstein himself refuses to take—and he seems unable to understand Wittgenstein’s refusal. Indeed, Dennett goes so far as to accuse Wittgenstein of trying “to hedge his bets” with the escape clause that inner sensation is “not a Nothing either.” In moving toward a full-fledged eliminativism, however, Dennett throws out the baby with the bathwater. He destroys Wittgenstein’s very point, in the act of trying to extend it.27 Wittgenstein’s critique in the Philosophical Investigations is in fact directed as much against the functionalism and scientism that Dennett so uncritically embraces, as it is against the old idealist metaphysics of the likes of, say, F. H. Bradley. Where idealism seeks to transform qualia into objectifiable facts, scientism seeks to eliminate qualia altogether, on the ground that they cannot be transformed into objectifiable facts. But Wittgenstein opposes both of these moves, for the same reason. He argues that not every thing in the world is a matter of fact. That is to say, he explicitly contradicts the claim, from his own earlier Tractatus, that “the world is all that is the case,” and that “the world divides into facts” that are entirely separate from one another (sec. 1 and 1.2). The point of Wittgenstein’s later thought in the Philosophical Investigations is precisely to grasp the peculiar, yet ontologically positive, status of nonthings or nonfacts (such as qualia or inner sensations). And this can only be done by disabusing us of the notion, either that such experiences are “facts” like all the others, or that they can be “explained away” by reduction to facts. Wittgenstein thus resists the imperialistic pretensions of global idealism and global scientism alike, both of which wrongly seek to encompass everything within their own theoretical constructions. Wittgenstein further develops his point about inner sensations in a deliberately paradoxical formulation: “I can know what
someone else is thinking, not what I am thinking” (PI, #315). Since I have feelings like fear and pain, it is either redundant or misleading to say that I know I have them. The use of the word know in such a case implies a confusion. For there is really no epistemological issue here at all. I do not need to “know” what I am thinking in order to think it. If I am in pain, I do not need to provide grounds for proving to myself that I am so. My being in pain is therefore not a matter of “justified true belief.” It is a kind of category error, to think that my actual experience of fear or pain is somehow dependent upon the question of how I have “access” to it, or how I am able to know that I am experiencing it. At the same time, I can rightly say that I know what you are thinking; for here I am able to cite grounds in order to justify my belief. Perhaps I know what you are thinking because you have told me; or perhaps I gather it from your facial expression, or from the way that you are acting (laughing uproariously, or doubling over in pain). Of course, I may in fact be mistaken as to what you are thinking; indeed, you may be acting in the way you are with the deliberate aim of deceiving me. But these sorts of errors can always be cleared up, at least in principle, through additional empirical evidence. When it comes to my own case, the question of knowledge plays out in much the same way. I cannot be directly mistaken about being in pain. However, I can deceive myself about my own mental state; recent psychological experiments suggest that this happens more often than not. In this way, I might not know that I am in pain. Also, I may well be in error when I try to analyze my pain in discursive terms and specify to myself just what it is that I am thinking and feeling. This is because, to the extent that I do know what I myself am thinking, I am making inferences about my own thinking from the outside, in the same way that I make inferences about the mental states of others. If we try to extend Wittgenstein’s line of questioning to nonhuman others, then the problem is evidently one of language—since Wittgenstein is so concerned with forms of speech in particular. Nagel expresses a certain uneasiness with Wittgenstein’s account, because “it depends too heavily on our language. . . . But not all conscious beings are capable of language, and that leaves the difficult problem of how this view accommodates the subjectivity
of their mental states.”28 However, Nagel goes on to alleviate this difficulty: We ascribe experience to animals on the basis of their behavior, structure, and circumstances, but we are not just ascribing to them behavior, structure, and circumstances. So what are we saying? The same kind of thing we say of people when we say they have experiences, of course. . . . Experience must have systematic connexions with behavior and circumstances in order for experiential qualities and experiential similarity to be real. But we need not know what these connexions are in order to ask whether experience is present in an alien thing.29 Evidently, my cat does not tell me what she wants in words—as another human person would be able to do. Nonetheless, I can often rightly say that I know, from observation, what my cat is thinking. (She wants dinner, she wants me to brush her, she wants to be left alone.) More important, even when I cannot tell what my cat is thinking, I can at least tell that she is thinking. I know that the “connexions” are there, even if I don’t know what they are; and I know that “experience” requires such connexions, but also that it cannot be reduced to them. My cat’s inner experiences are in no way dependent upon my ability to “translate” them into my own terms; nor are they vitiated by her inability to justify them by means of predicative judgments, or to articulate the “inferential relations” implied by the conceptualization of these experiences.30 All this implies that language should not be accorded too privileged a place in our inferences about inner experience; much less should language be necessary, in order for some sort of inner experience to exist. What David Chalmers calls the “hard problem” of consciousness indeed plays out the same way in relation to a bat, or a cat—or for that matter, in Chalmers’s notorious example, to a thermostat—that it does in relation to another human being.31 In the latter case, species similarity and the common ability to speak allow us to describe “what it is like” for the other person a bit more extensively; but this is only a difference of degree, not one of kind. An extreme behaviorist will deny the existence of interiority in
speaking human beings as well as in nonspeaking animals and nonliving thermostats. But there is no justification for inferring interiority on the basis of linguistic behavior, while at the same time refusing to make such an inference in the case of other sorts of observed behavior. With or without language, therefore, we are observing the behavior of others (or even of ourselves), in order to infer the existence of an inner experience that, in its own right, is irreducible to observable behavior. Following Wittgenstein’s suggestions, we must say that this inner experience indeed exists; but it does so in a quite particular manner. Inner mental states, such as sensations and experiences, are not reducible to discursive language, for the same reason that they are not objectifiable as “facts” that can be observed directly in the third person. “What it is like to be a bat” is not a Something: for it is not specifiable as a thing at all. But the bat’s inner experience is not a Nothing either. This means that it is indeed “like something” to be a bat, even though “what it is like” is not a Something. This distinction is not a mere play on words, but a basic ontological condition. The mentality of a bat cannot be displayed objectively, but it also cannot simply be dismissed, or explained away. A bat’s experience—or a human being’s, for that matter—is indubitable and incorrigible; but at the same time, it is spectral, impalpable, and incommunicable. Indeed, this is the reason why the very attempt to discuss subjective experience in terms of qualia, precise sensations, and the like, is—as Wittgenstein suggested—not very useful. As Whitehead continually points out, most experience is vague and indistinct. We largely confront “percepta which are vague, not to be controlled, heavy with emotion.” Primordial experience involves a sense of influx of influence from other vaguer presences in the past, localized and yet evading local definition, such influence modifying, enhancing, inhibiting, diverting, the stream of feeling which we are receiving, unifying, enjoying, and transmitting. This is our general sense of existence, as one item among others, in an efficacious actual world. (PR, 178)
Or, as Whitehead puts it in an earlier passage in Process and Reality, “The primitive experience is emotional feeling”; but “the feeling is blind and the relevance is vague” (PR, 163). Very few aspects of our experience are clear and distinct; we can only obtain “a clear-cut experience by concentrating on the abstractions of consciousness” and excluding everything else (MT, 108). Whitehead suggests that the fatal mistake of philosophers from Descartes through Hume was to restrict themselves to such abstractions, by taking “clear and distinct ideas” as their starting point. We may say much the same about analytic philosophers today who argue about qualia. For the problem with speaking of “qualia” at all—pro or con—is that, by invoking them in the first place, we have already distorted them by extracting them from the Jamesian stream of consciousness in which they occur. Once we have done so, it is easy enough to take the further step that Dennett does, and “prove” that they do not exist at all. In other words, Dennett’s eliminativism is merely the reductio ad absurdum of the premises that he shares with his opponents. Most of our experience is already lost, once it has been analyzed in detail and divided into discrete parts. All these discussions in the philosophy of mind miss the point, because mentality is both far more diffuse, and far more widespread, than these thinkers realize. Such is Whitehead’s version of the claim that mentality is neither a Something nor a Nothing. I think that Galen Strawson’s argument for panpsychism makes the most sense if it is read in the light of these considerations. Strawson argues that mentality of some sort—whether we call it “experience, ‘consciousness,’ conscious experience, ‘phenomenology,’ experiential ‘what-it’s-likeness,’ feeling, sensation, explicit conscious thought”—is “the phenomenon whose existence is more certain than the existence of anything else.” Everything that we know about the world, and everything that we do in fact experience, is dependent upon the prior condition that we are able to have experiences in the first place. The mental, for Strawson, is therefore not something that we can point to: we have already pre-assumed it, even before we look for it explicitly. Therefore, he says, we must reject “the view—the faith—that the nature or
essence of all concrete reality can in principle be fully captured in the terms of physics.” Indeed, according to Strawson, the only way to explain “the nature or essence of experience” in “the terms of physics” is to explain it away, eliminating it almost by definition. Reductionists like Dennett end up trying to “deny the existence of experience” altogether—a move that Strawson regards as absurd and self-refuting.32 In insisting that we are more sure of our own conscious experience than of anything else, Strawson knowingly echoes the Cartesian cogito. But he gets rid of the dualism, and the reification, that have always been seen as the most problematic parts of Descartes’s argument. For Strawson, “experiential phenomena” are real in their own right, and there cannot be an experience without an experiencer. But at the same time, Strawson makes no particular claim about the nature of the “I” that thinks; and he certainly does not pronounce himself to be “a thinking thing.” Where Descartes posited mind as entirely separate from matter or extension, Strawson makes precisely the opposite move. Given the evident reality of the mental, together with a basic commitment to what he calls “real physicalism,” he says that we must reject the common assumption “that the physical, in itself, is an essentially and wholly non-experiential phenomenon.”33 If we reject dualism and supernaturalism, then mentality itself must be entirely physical. This might seem to be altogether reasonable once we have accepted—as Whitehead already urged us to do, nearly a century ago, and as many speculative realists and “new materialists” now assert—that matter is not inert and passive, but immanently active, productive, and formative. However, this is not quite Strawson’s claim. For he is not arguing for the vibrancy of matter on the basis of quantum theory, as Whitehead did, and as Karen Barad currently does, nor is he arguing for it on the basis of the new sciences of complexity and emergence, as Jane Bennett, Manuel De Landa, and other “new materialists” tend to do.34 Rather, Strawson’s position is radically anti–systems theory and anti-emergentist. He rejects the idea that anything nontrivial can emerge on a higher level that was not already present in, and linearly caused by, microconstituents at a lower level. Wetness can arise from the agglomeration of water molecules that are not in themselves wet; this is something
that physics has no trouble explaining.35 And, although we do not know for sure how life originally came out of nonlife, we are able at least to develop plausible and coherent physico-chemical scenarios about how it might have happened. The emergence of life—which seemed so mysterious to the nineteenth-century vitalists—does not trouble us metaphysically any longer. But Strawson insists that “one cannot draw a parallel between the perceived problem of life and the perceived problem of experience in this way, arguing that the second problem will dissolve just as the first did, unless one considers life completely apart from experience.”36 According to Strawson, physics cannot even begin to explain how sentience could arise out of some initially nonsentient matter. Even if we discover the neural correlates of consciousness, that holy grail of contemporary neuroscience, this will not tell us anything about how and why inner experience is materially possible in the first place. Strawson’s rejection of what he calls “brute emergence” rests on an unquestioned scientific reductionism, or on what Sam Coleman calls “smallism”: “the view that all facts are determined by facts about the smallest things, those that exist at the lowest ‘level’ of ontology.”37 This is a position that most new materialists, and non-eliminativist speculative realists, would never accept. Yet I think that we would do well to entertain Strawson’s position to a certain extent, if only because it offers some resistance to our facile habit of using “quantum indeterminacy” and “higher-level emergence” like magic wands to account for whatever it is that we do not know how to explain. As Strawson puts it, “It is built into the heart of the notion of emergence that emergence cannot be brute in the sense of there being absolutely no reason in the nature of things why the emerging thing is as it is (so that it is unintelligible even to God).”38 Of course, Quentin Meillassoux, with his notion of the necessity of contingency, maintains precisely this.39 Radical or brute emergence reaches the point of its reductio ad absurdum in Meillassoux’s claim that life and thought both arose miraculously, by pure contingency, out of a previously dead and inert universe.40 Meillassoux in fact argues for the origin of pure novelty out of nothing.41 But if we are to reject miracles, and maintain—as Meillassoux emphatically does not—some version of the principle of sufficient reason, or of Whitehead’s ontological principle (which
states that “there is nothing which floats into the world from nowhere” [PR, 244]), then we must accept that novelty cannot emerge ex nihilo. Whitehead indeed says that creativity, or “the principle of novelty,” is “the universal of universals characterizing ultimate matter of fact” (PR, 21). But he also insists that novelty is only possible on the basis of, and in response to, “stubborn fact which cannot be evaded” (PR, 43). Newness always depends on something prior. It’s a bit like the way a DJ creates new music by sampling and remixing already existing tracks. A similar logic leads to Strawson’s insistence that sentience must already have been present, at least potentially, from the very beginning. What’s most interesting about Strawson’s argument is how it leads him into a paradoxical tension, or a double bind. Strawson, like most analytic philosophers, is a scientific reductionist, and yet he maintains that subjective experience is irreducible. He insists that everything is “physical” and reducible to its ultimate microcomponents; and that mentality is as real, and therefore as “physical,” as anything else. And yet Strawson also asserts that mentality is entirely inaccessible to scientific explanation. The very phenomenon of being able to have experiences—the phenomenon that alone makes objective, third-person knowledge possible in the first place—cannot itself be accounted for in science’s objective, third-person terms. There is no way to bridge the gap between first-person and third-person perspectives. Strawson refuses to alleviate this tension by adopting any of the usual philosophical dodges (dualism, emergentism, and eliminativism). Instead, he adopts the ontological postulate that mentality must already be an aspect, or a basic quality, of everything that exists. This is why “experience” cannot be limited to human beings, or even to living things in general. Panpsychism is the necessary consequence of respecting the self-evidence of phenomenal experience, without trying either to hypostasize it or to extirpate it. Thought is not a specifiable, separable Something, but neither is it a mere vacancy, a Nothing. It is rather the inner, hidden dimension of everything. “All physical stuff is energy, in one form or another,” Strawson says, “and all energy, I trow, is an experience-involving phenomenon.”42 In this regard, Strawson’s position is not far from Whitehead’s.
In discussing how his own philosophy of process (or of “organism”) relates to the discoveries of twentieth-century science, Whitehead writes: If we substitute the term “energy” for the concept of a quantitative emotional intensity, and the term “form of energy” for the concept of “specific state of feeling,” and remember that in physics “vector” means definite transmission from elsewhere, we see that this metaphysical description of the simplest elements in the constitution of actual entities agrees absolutely with the general principles according to which the notions of modern physics are framed. (PR, 116) He adds that “direct perception [can] be conceived as the transference of throbs of emotional energy, clothed in the specific forms provided by sense” (PR, 116). In this way, Whitehead, like Strawson, locates the coordinates of “experience” entirely within the natural world described to us by physics, even though such experience cannot itself be accounted for by physics. This is why subjective consciousness is spectral and unqualifiable, but nonetheless entirely actual. How is this possible? The next step in the argument is taken by Sam Coleman, who radicalizes Nagel’s formulation in the “Bat” essay. Nagel himself proposes the “What is it like?” question as a kind of test, a way of determining whether or not an entity is conscious. It is evidently “like something” to be a bat, but for Nagel it might well not be like anything at all to be a rock. Coleman, however, transforms Nagel’s epistemological criterion into a foundational ontological principle. Coleman argues that “absolute what-it-is-likeness” does not just apply to living things in particular; rather, it must lie “at the heart of ontology.”43 Following Bertrand Russell and Arthur Eddington, Coleman suggests that “the concepts of physics only express the extrinsic natures of the items they refer to. . . . The question of their intrinsic nature is left unanswered by the theory, with its purely formal description of micro ontology.”44 That is to say, contemporary physics—no less than the physics of Lucretius—only “tells us what an atom looks like to others”: it describes an atom in terms of its extrinsic, relational qualities. The
study of these relations is what physical science is all about. But neither Lucretian, nor Newtonian, nor modern (relativistic and quantum) physics has ever pretended to tell us what an atom actually is, intrinsically, for itself. And this is the gap that panpsychism today seeks to fill—just as Leibniz sought to fill a similar gap in the physics of Newton. Coleman claims, therefore, that “the essence of the physical . . . is experiential”; all of the causal interactions tracked by physics must necessarily involve, as well, “the doings of intrinsically experiential existents: causality as described by physics, as currently conceived, captures the structure of these goings on, but leaves out the real loci of causal power.”45 In other words, physical science gives us true knowledge of the world, but this knowledge is exclusively external, structural, and relational. Physics can help me to know what someone else is thinking; but it is powerless to explain what I am thinking. And the most hard-edged contemporary philosophy of science indeed insists upon this distinction. For James Ladyman and Don Ross, the lesson of contemporary physics is that “there are no things; structure is all there is.” Physical science can only describe relational properties. Ladyman and Ross tell us that we must “give up the attempt to learn about the nature of unobservable entities from science.” They conclude that, because “intrinsic natures” are not known to science, they simply do not exist. As far as Ladyman and Ross are concerned, nothing has an irreducible inside; to posit one is to make an illegitimate inference as a result of what they scornfully describe as “prioritizing armchair intuitions about the nature of the universe over scientific discoveries.” In Ladyman and Ross’s vision, physical science is exclusively relational; anything not determined by these relations must be eliminated.46 Anyone who has followed recent discussions in speculative realism is likely to be aware of Graham Harman’s critique of Ladyman and Ross.47 But Harman’s is only one of many voices to have found this sort of “radical relationism” untenable and to insist instead that entities must have intrinsic natures of some sort. William Seager summarizes various forms of the “intrinsic nature” argument and claims that, without it, one cannot offer anything like an adequate treatment of the problem of consciousness—much less maintain panpsychism. “We are forced to postulate an intrinsic
ground for the relational, structural, or mathematical properties” of which physics informs us—even if physics itself cannot provide this ground.48 Seager and Harman alike insist, rightly, that entities must have something like intrinsic properties, because relations cannot exist without relata.49 Therefore, as Harman puts it, “the world swarms with individuals.”50 Whitehead, for his part, says much the same thing: “The ultimate metaphysical truth is atomism. The creatures are atomic” (PR, 36). If we are to account for this irreducible “plurality of actual entities” (PR, 18), and if we are to take seriously Coleman’s demand that actually existing things (from neutrinos through houses and trees and on to galaxies) be understood intrinsically, as “real loci of causal power,” then the crucial ontological question is the following: how are we to identify these individuals, or ultimate relata? In just what does a thing’s intrinsic nature consist? The answer, I think, can only be that all entities have insides as well as outsides, or first-person experiences as well as observable third-person properties. A thing’s external qualities are objectively describable; but its interiority is neither a something nor a nothing, as Whitehead puts it: In the analysis of actuality the antithesis between publicity and privacy obtrudes itself at every stage. There are elements only to be understood by reference to what is beyond the fact in question; and there are elements expressive of the immediate, private, personal, individuality of the fact in question. The former elements express the publicity of the world; the latter elements express the privacy of the individual. (PR, 289) Everything in the universe is both public and private. A neutrino is extremely difficult to detect: it is only affected by the weak nuclear force, and even then, its presence can only be inferred indirectly, through the evidence of its rare interactions with atomic nuclei. Nonetheless, this is enough to define the neutrino as an interactional and relational entity, or what Whitehead calls “a public datum” (PR, 290). The neutrino cannot exist in the first place, apart from the fluctuations of the quantum fields within which it
is so elusively active. At the same time, we must also conceive the privacy of the neutrino, its status as an “unobservable entity” with its own intrinsic experiencings, strange as that might seem. For it is indeed “like something” to be a neutrino. Now, Graham Harman claims that all objects are “withdrawn” from access. As far as I can tell, this withdrawal is nothing more (but nothing less) than the “what-it-is-likeness,” or private interior, of a thing that is also outwardly public and available. My problem with Harman is that he seems to me to underestimate this latter aspect. “Things exist not in relation,” Harman writes, “but in a strange sort of vacuum from which they only partly emerge into relation.”51 This necessarily follows, he argues, from the fact that an object can never be equated with, or reduced to, our knowledge of it: Let’s imagine that we were able to gain exhaustive knowledge of all properties of a tree (which I hold to be impossible, but never mind that for the moment). It should go without saying that even such knowledge would not itself be a tree. Our knowledge would not grow roots or bear fruit or shed leaves, at least not in a literal sense.52 The example is a good one, and Harman indeed scores a point here against the exclusively “structural realism” of Ladyman and Ross. But what leads Harman to assume that one entity’s relation with another entity is constituted and defined by the knowledge that the first entity has of the second entity? Such an approach reduces ontology to epistemology. In fact, knowledge is just one particular sort of relation—and not even an especially important one, at that. Most of the time, entities affect other entities blindly, without knowledge playing a part at all. To cite one of Harman’s own favorite examples: when fire burns cotton, it only encounters a few of the properties of the cotton. In the course of the conflagration, “these objects do not fully touch one another, since both harbor additional secrets inaccessible to the other, as when the faint aroma of the cotton and the foreboding sparkle of the fire remain deaf to one another’s songs.”53 That is to say, the cotton has many qualities—like its texture, its aroma, and its color—that the fire never comes to “know.” Harman therefore
concludes that “one object never affects another directly, since the fire and the cotton both fail to exhaust one another’s reality”; or again, “fire does not exhaust the reality of cotton by burning it.”54 Now, I cannot disagree with the epistemological argument that Harman is making here. I find it legitimate for him to describe the interaction of fire with cotton in the same way as he does the interaction of a human mind with either the fire or the cotton; and I agree with him that neither the mind nor the fire apprehends, or “knows,” all of the qualities of the cotton. And yet, that is not the entire story. For there is a level of being beyond (or beneath) the epistemological one. As the cotton is burned, even those of its properties to which the fire is wholly insensitive are themselves also altered or destroyed. That is to say, fire affects even those aspects of the cotton that it cannot come to “know.” And such is the case with all interactions between entities, when one thing affects, or is affected by, another. So, while I agree with Harman that the encounter between fire and cotton does indeed involve a sort of limited knowledge, I do not think that this dimension of the encounter is in any sense definitive. Whitehead reminds us that the inner and outer, or private and public, aspects of an entity always go together. “There are no concrete facts which are merely public, or merely private. The distinction between publicity and privacy is a distinction of reason, and is not a distinction between mutually exclusive concrete facts” (PR, 290). Whitehead also makes this “distinction of reason” between public and private in temporal terms. Each actual occasion occupies a particular position within the flow of time; for it is causally dependent upon the other occasions in its light cone that have preceded it. However, “contemporary events . . . happen in causal independence of each other. . . . The vast causal independence of contemporary occasions is the preservative of the elbow-room within the Universe. It provides each actuality with a welcome environment for irresponsibility” (AI, 195). In the thick duration of its coming-to-pass, each actual entity enjoys the freedom of its own inner experience. It feels, in a way that is scarcely expressible. The “withdrawal” of objects can have no other meaning. At the same time, each actual entity is open to causal influences: it has been shaped by the influence of other entities that preceded it, and it
will itself go on to exert causal influence upon other entities that succeed it. In this way, grounded in the past and reaching toward the future, every actual entity has an immense capacity to affect and to be affected: this is what defines its outward, public aspect. In this sense, relationalism is true. No entity is fully determined, or entirely defined, by its relations; “there is nothing in the real world which is merely an inert fact” (PR, 310). But it is equally the case that no entity is altogether free from the web of influences and affections, extending through time, that are its very conditions of existence. “All origination is private. But what has been thus originated, publicly pervades the world” (PR, 310). This is a situation that can be read in both directions. As the Stoics observed so long ago, I am inwardly free and outwardly in chains. But it is equally accurate for me to say that I am inwardly isolated and imprisoned, while outwardly I am able to make affiliations and pursue enlivening relations. Panpsychism is the recognition that this duality of privacy and relationality is not just a human predicament, but the condition of all entities in the universe. As a coda, I quickly outline some consequences of what I have said.
• Evidently, I have no proof of the inner life of a neutrino. But strictly speaking, I also have no proof of the inner life of a bat or a cat, or indeed of another human being. This absence of proof is unavoidable, given the spectral nature of inner, private experience. But because I nevertheless do acknowledge and respect the inner lives and values of other human beings, I can potentially do the same with other entities of all sorts. What’s needed, perhaps, is an extension of sympathy.
• Panpsychism allows us to overcome “the correlation of thinking and being” without the necessity of eliminating the former. We step outside the correlationist circle when we acknowledge both that other entities think and that their thinking is, even in principle, inaccessible to us.
• My own panpsychist position is in fact quite close to Harman’s by virtue of being its symmetrical inverse. Harman argues that things think when they interact, or enter into relations, with other things; he rejects universal panpsychism because he argues that entities are “dormant” and unthinking whenever they are entirely withdrawn and free of all relations. Although (unlike Harman) I do not believe that absolute non-relationality is possible, the real difference between us is that I attribute mentality precisely to the “withdrawn” or entirely private aspect of entities rather than to their public, relational side.
• Experience, or mentality, or spectral interiority is always a matter of what Whitehead calls “feeling” before it is a matter of cognition.
• Sentience is a more basic category than life or vitality. Life is possible because there is already sentience, rather than the reverse. (A difference between Whitehead and Deleuze, as noted by Brassier.)55
Notes
1. Rudy Rucker, “Panpsychism Proved,” in Futures from Nature, ed. Henry Gee (New York: Macmillan, 2007), 250.
2. David Skrbina, Panpsychism in the West (Cambridge, Mass.: MIT Press, 2005).
3. Thomas Nagel, Mortal Questions, Canto ed. (Cambridge: Cambridge University Press, 1991; first published 1979), 181. Citations are to the 1991 edition.
4. Rudy Rucker, “Mind Is a Universally Distributed Quality,” Edge, www.edge.org/q2006/q06_3.html#rucker.
5. Colin McGinn, “Hard Questions: Comments on Galen Strawson,” in Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?, ed. Anthony Freeman (Exeter, U.K.: Imprint Academic, 2006), 93.
6. Galen Strawson, “Realistic Monism: Why Physicalism Entails Panpsychism,” in Consciousness and Its Place in Nature, 25.
7. See Anthony Freeman, ed., Consciousness and Its Place in Nature; David Skrbina, Mind That Abides: Panpsychism in the New Millennium (Amsterdam: John Benjamins, 2009); and Michael Blamauer, ed., The Mental as Fundamental: New Perspectives on Panpsychism (Piscataway, N.J.: Transaction Books/Rutgers University, 2012).
8. Robert Brandom, Reason in Philosophy: Animating Ideas (Cambridge, Mass.: Belknap Press of Harvard University Press, 2009), 148. See also Jon Cogburn, “Brandom on (Sentient) Categorizers versus (Sapient) Inferers,” Jon Cogburn’s Blog, March 6, 2010, http://drjon.typepad.com/jon_cogburns_blog/; and Pete Wolfendale, “Brandom on Ethics,” Deontologistics: Researching the Demands of Thought blog, February 27, 2010, http://deontologistics.wordpress.com/.
9. Pete Wolfendale, “Not So Humble Pie,” Deontologistics, http://deontologistics.files.wordpress.com/2012/04/wolfendale-nyt.pdf.
10. Anthony Trewavas and František Baluška, “The Ubiquity of Consciousness,” EMBO Reports 12, no. 12 (November 18, 2011): 1221–25. See also Steven Shaviro, ed., Cognition and Decision in Nonhuman Biological Organisms (Ann Arbor, Mich.: Open Humanities Press, 2011).
11. Alfred North Whitehead, Adventures of Ideas (New York: Free Press, 1967; first published 1933), 132. Citations are to the 1967 edition and are hereafter indicated parenthetically in the text as AI.
12. Alfred North Whitehead, Process and Reality (New York: Free Press, 1978; first published 1929), 15. Citations are to the 1978 edition and are hereafter indicated parenthetically in the text as PR.
13. Graham Harman, Prince of Networks: Bruno Latour and Metaphysics (Melbourne: re.press, 2009), 152–53.
14. Levi Bryant, The Democracy of Objects (Ann Arbor, Mich.: Open Humanities Press, 2011), 143.
15. Ibid., 141.
16. Gilbert Simondon, L’individuation à la lumière des notions de forme et d’information (Grenoble: Éditions Jérôme Million, 2005); and Charles Olson, “The Kingfishers,” in The Collected Poems of Charles Olson: Excluding the Maximus Poems, ed. George Butterick (Berkeley: University of California Press, 1987), 86.
17. Alfred North Whitehead, Modes of Thought (New York: Macmillan, 1938), 109, 111. Hereafter indicated parenthetically in the text as MT.
18. Ludwig Wittgenstein, Tractatus Logico-Philosophicus, trans. D. F. Pears and B. F. McGuinness (London: Routledge and Kegan Paul, 1961; first published 1922), sec. 6.41. Citations are to the 1961 edition and appear hereafter parenthetically in the text.
19. Alfred North Whitehead, Science and the Modern World (New York: Free Press, 1967; first published 1925), 90. Citations are to the 1967 edition.
20. Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier (London: Continuum, 2008); and Ray Brassier, Nihil Unbound: Enlightenment and Extinction (New York: Palgrave Macmillan, 2009).
21. Meillassoux, After Finitude, 5.
22. Thomas Nagel, Mortal Questions, Canto ed. (Cambridge: Cambridge University Press, 1991; originally published 1979), 166, 170. Citations are to the 1991 edition. “What Is It Like to Be a Bat?” originally appeared in Philosophical Review 83, no. 4 (October 1974): 435–50.
23. Harman, Prince of Networks, 225.
24. Ian Bogost, Alien Phenomenology, or What It’s Like to Be a Thing (Minneapolis: University of Minnesota Press, 2012).
25. Nagel, Mortal Questions, 191.
26. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe (New York: Macmillan, 1953), #298. Hereafter cited as PI in parenthetical citations in the text.
27. Daniel Dennett, “Quining Qualia,” in Consciousness in Modern Science, ed. A. Marcel and E. Bisiach (Oxford: Oxford University Press, 1988), http://ase.tufts.edu/cogstud/papers/quinqual.htm.
28. Nagel, Mortal Questions, 191.
29. Ibid., 191–92.
30. Here I am drawing on—and expressing disagreement with—Pete Wolfendale, “Phenomenology, Discourse, and Their Objects,” Deontologistics, December 20, 2009, http://deontologistics.wordpress.com/.
31. David Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2, no. 3 (1995): 200–219, http://consc.net/papers/facing.html; and David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996).
32. Strawson, “Realistic Monism,” 3, 4, 7.
33. Ibid., 4, 11.
34. See Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham, N.C.: Duke University Press, 2007); Jane Bennett, Vibrant Matter: A Political Ecology of Things (Durham, N.C.: Duke University Press, 2010); and Manuel De Landa, Intensive Science and Virtual Philosophy (London: Continuum, 2002).
35. Strawson, “Realistic Monism,” 13–14.
36. Ibid., 20.
37. Ibid., 18. Sam Coleman, “Being Realistic: Why Physicalism May Entail Panexperientialism,” in Consciousness and Its Place in Nature, 40.
38. Strawson, “Realistic Monism,” 18.
39. Meillassoux, After Finitude, 65, 71.
40. Graham Harman, Quentin Meillassoux: Philosophy in the Making (Edinburgh: Edinburgh University Press, 2011), 182–87.
41. Ibid., 179.
42. Strawson, “Realistic Monism,” 25.
43. Sam Coleman, “Mind under Matter,” in Mind That Abides, 97.
44. Coleman, “Being Realistic,” 52.
45. Ibid.
46. James Ladyman and Don Ross, with David Spurrett and John Gordon Collier, Every Thing Must Go: Metaphysics Naturalized (Oxford: Oxford University Press, 2007), 130, 92, 10.
47. Graham Harman, “I Am Also of the Opinion That Materialism Must Be Destroyed,” Environment and Planning D: Society and Space 28, no. 5 (2010): 772–90.
48. William Seager, “The ‘Intrinsic Nature’ Argument for Panpsychism,” in Consciousness and Its Place in Nature, 135.
49. Ibid., 140. Harman, “I Am Also of the Opinion,” 786.
50. Harman, “I Am Also of the Opinion,” 788.
51. Harman, Prince of Networks, 132.
52. Harman, “I Am Also of the Opinion,” 788.
53. Graham Harman, Guerrilla Metaphysics: Phenomenology and the Carpentry of Things (Chicago: Open Court, 2005), 170.
54. Ibid., 188. Harman, Prince of Networks, 143.
55. See Ted Chiang, The Lifecycle of Software Objects (Burton, Mich.: Subterranean Press, 2010).
Artfulness
Erin Manning
Thanks to art, instead of seeing a single world, our own, we see it multiply. . . .
—Gilles Deleuze, Proust and Signs
Part 1. The Art of Time
The word art in German (die Art) continues today to carry one of the earliest meanings of the term: “manner” or “mode.” In the early thirteenth century, art was still connected to this qualifying notion, attuned less to an object than to a skill or craft of learning.1 A way of learning. To speak of a “way” is to dwell on the process itself, on its manner of becoming. It is to emphasize that art is before all else a quality, a difference in kind, a technique, that maps the way toward a certain attunement of world and expression. Art, understood along these terms, is not yet about an object, about a form, or a content. It is still on its way, in its manner of becoming. It is intuition, in the Bergsonian sense. As Henri Bergson defines it, intuition is the art—the manner—in which the very conditions of experience are felt. Beyond the state (and the status quo), across the force of the actual, intuition touches on the decisive turn within experience in the making that activates a difference within time’s durational folds: intuition activates the proposition at the heart of the as yet unthought. In its feeling-forth of future potential, intuition draws on time. It touches the sensitive nerve of time. Yet intuition is not duration per se. “Intuition is rather the movement by which we emerge from our own duration, by which we make use of our own duration to affirm and immediately to recognize the existence of other durations.”2 Intuition is the relational movement through which the present begins to coexist with its futurity, with the quality or
manner of the not-yet that lurks at the edges of actual experience.3 This is art: the intuitive potential to activate the future, to make the middling of experience felt where futurity and presentness coincide, to invoke the memory not of what was, but of what will be. Art, the memory of the future. Duration is lived only at its edges, in its commingling with actual experience. In the time of the event, what is known is the mobility of experience, experience in the making. To actually measure the time of the event, a backgridding activity is necessary. This activity “after the fact” tends to deplete the event-time of its middling, deactivating the relational movement that was precisely event-time’s force. Backgridded, experience is reconceived in its poorest state: out of movement. Out of movement is out of act. For Alfred North Whitehead, all experience is in-act, variously commingling with the limits of the not-yet and the will-have-been. Experience is (in) movement. Anything that stands still—an object, a form, a being—is an abstraction (in the most commonsense notion of the term) from experience. It is not the image of the past (for the past cannot be differentiated from the in-act of the future-presenting), but a cutout from a durational field already elsewhere. “Object and objective denote not only what is divided, but what, in dividing, does not change in kind.”4 Everything changes in kind. This is the paradox: for there to be a theory of the “object,” the “object” has to be conceived as out of time, relegated beyond experience. For in experience, what we call an object is always, to some degree, not-yet, in process, in movement. We know it not in its fullness, in its ultimate form, but as an edging into experience. What resolves in experience is not, as Whitehead would argue, first and foremost a chair, but the activity of “sitability.” It is only after the fact, after the initial entrainment the chair activates, after the movement into the relational field of sitability, that the chair as such is ascertained, felt in all its “object-like” intensity. But even here, Whitehead would argue, what stands out is not its three-dimensional form, but its quality of form-taking. Form is held in abeyance. “Chair” is not an object so much as a feeling. To hold in abeyance does not mean that the form is contained
in an unreachable elsewhere. The object is this abeyance—the not- quite-form that cannot be separated out from the milieu, from the field that it coactivates. Whether sitability or the plushness of comfort, the experience of chair is never a finite one, and it is never contained by the dimensions of the object (or the subject) itself. The art of time makes this more-than of the object felt, and it does so by activating the differential of time in the making, the difference between what was and what will be. For all actualization is in fact differentiation. The in-act is the dephasing of the process toward the coming into itself of an occasion of experience. In this dephasing, the differences in kind between the not-yet and the will-have-been are felt, but only at the edges of experience. They are felt in the moving, activating the more-than. To feel in the moving, to activate the more-than that coincides both with object likeness and relational fielding, is to experience the nonlinearity of time where nothing is: everything acts. There is no succession in the metric sense. To act is to activate as much as to actualize, to make felt the schism between the virtual folds of duration and the actual openings of the now in its quality of passage. On its way. The emphasis on the ontogenesis of time is important: the quality of the way depends on there not being a notion of time or space that preexists the event of expression that art creates. This is not to deny the past, but to say instead that what exists in experience is not a linear timeline but “various levels of contraction.”5 The manner of existence is how it contracts, dilates, expands. The manner of experience is its quality, and outside of this quality there is nothing. The “objective” does not preexist it, and does not justify it. Quality is how the world comes to expression, how it is felt, how it is lived, and how it does its living. How the quality quantifies is analogous to the question of how time is counted: it does so after the fact, abstracted from the force of movement-moving. The quality, the manner, the art of time is, as I suggested above, intuition. Intuition is a good term here because it reminds us that experience demands a taking, a making of time. It insists: Take time before stopping at the object, before defining art as genre, as historical marker, as form or content. The art of time is not about definitions so much as about
sensations, about the affective force of the making of time where “we are no longer beings but vibrations, effects of resonance, ‘tonalities’ of different amplitudes.”6 Nor is the art of time about economy, about marking the worthiness of a given experience, the usefulness of time spent. “We must become capable of thinking . . . change without anything changing.”7 Duration as time felt in the beyond of apparent change, independently of any notion of linear succession. Intuition never stems from what is already conceived. It introduces into experience a rift in knowing, a schism in perception. It forces experience to the limits not only of what it can imagine, but what it has technically achieved. For intuition is never separate from technique. It is a rigorous process that consists in pushing technique to its limit, revealing its technicity—the very outdoing of technique that makes the more-than of experience felt. Bergson calls it a long encounter, a mode of work, that has nothing to do with synthesis or recognition. Intuitively, a memory of the future is crafted, a memory for the future. A memory of the future is the direct experience of time’s differential. “It is a question here of something which has been present, that has been felt, but that has not been acted.”8 A memory not only of and for the human: a memory active in duration itself, a memory inseparable from duration’s movement, a movement always relational. Not only of and for the human because duration is not for the human—“Duration does not attach itself to being—or to beings—it coincides with pure becoming.”9 A memory for the future activates the smallest vibrational intervals—human and nonhuman—that lurk at the interstices of experience. It intuits them, activating their force of becoming such that their movements begin to make a difference. This is intuition: the captivation of the welling forces that activate the dephasing of experience into its more-than. A memory of the future because this more-than cannot quite be captured, cannot be held within the matrix of representation. It is an attunement, an affective tonality, a sensation of what has already come to be. Déjà-felt. Bergson calls the mechanism by which this future-feeling arises “sympathy.” Sympathy not “of” the human but “with” experience in the making. “We call intuition that sympathy by which we are
transported to the interior of an object to coincide with what it has that is unique and, consequently, inexpressible.”10 Sympathy as the motor of excavation that allows the movement to be felt, that opens experience to the complexities of its own unfolding. What is intuited is not matter per se: “There is therefore no intuition of matter, of life, of society in and of themselves, that is, as nouns.”11 There is intuition of forces, of qualities that escape the superficial interrogation of that which has already taken its place. Intuition is always and only compelled by what is on its way. Deleuze might speak of intuitions or the art of time as essence. In his early work on Proust, Deleuze speaks of essence as the force of the as-yet-unfelt in experience. Essence is here everything it usually isn’t: it is not truth, or origin. Essence is the ultimate difference in kind. Linked to art, essence for Deleuze speaks of the unquantifiable in experience, of that which exceeds the equivalence between sign and sense. “At the deepest level, the essential is in the signs of art.”12 The signs of art do not convey meaning, they make felt its ineffability. The essential, the sign that does not have sense so much as it creates, or undoes sense, is a species of time, a durational fold in experience. This quality of time—the art of time—is not abstracted from its coming-into-formation. The field it creates is analogous to its time, a time not of change or succession—a time of difference in itself. Time—as Deleuze says, le temps—is plural. A plurality of time in time multiplies experience in the now. This, Deleuze suggests, is what art can do. Art not as the form an object takes, but as the manner in which time is composed. A composition that has effects, for it activates the difference at the heart of being: what is activated here is not a subject or an object, but a field of expression through which a different quality of experience is crafted. What art can do is to bypass the object as such and make felt instead the dissonance, the dephasing, the complementarity of the between, of what Deleuze calls the “revelatory” or refracting milieu.13 The refraction produces not a third object but a quality of experience that touches the edgings into form of the material’s intuition. Matter intuits its relational movement, activating from within its qualitative resonance an event that makes time for that which cannot quite be seen but is felt in all its uncanny difference.
Tuning into the art of time involves work. It requires an attentiveness to the field in its formation. “An artist ‘ages’ when ‘by exhaustion of his brain’ he decides it is simpler to find directly in life, as though ready-made, what he can express only in his work, what he should have distinguished and repeated by means of his work.” The mechanical cannot be confused with the art of time, for the mechanical repeats what has already come to pass, playing the tune of Friedrich Nietzsche’s last men, the men who dwell in the swamp of time past. Intuition, in its amplification of the technicity of a process, in its capacity to think the more-than in a memory of the future, forecasts what Deleuze calls “an original time” that “surmounts its series and its dimensions,” a “complicated time” “deployed and developed,” a time devoid of preconceptions, of directions, a time that makes its own way.14 The art of time makes apparent the complexity of relation, relation as the field in which creativity dwells, creativity in the name of potential, of the more-than. For relation is not solely between-two. Relation is the force that makes felt the how of time as it co-composes with experience in the making. It is out of relation that the solitary is crafted, not the other way around: relation is what an object, a subject is made of. This is what David Lapoujade means when he writes that “at the heart of the human there is nothing human.” The world is made of relation activated by intuition, felt sympathetically on the edges of experience, touching its nonhuman tendencies. “We must move beyond the limits of human experience, sometimes inferior, sometimes superior, to attain the pure material plane, the vital, social, personal, spiritual planes across which the human is composed.”15 What is at stake in the intuiting of the more-than that art requires is not the requalification of subject and object, artist and work, but the shedding of all that preexists the occasion in which the event takes place. Only this, Lapoujade suggests, makes the unrealizable realizable. For we must be clear: the memory of the future, the art of time—these are not quantifiable measures. These are speculative propositions, forces within the conceptual web that lurks on the edges of the thinkable. They rejoin Michael Taussig’s provocative suggestion that we ask unanswerable, impossible questions such
as “What color is the sacred?” that we engage in a sayability that exceeds the known and the knowable.16 The art of time is the proposition art can make to a world in continual composition. Instead of immediately turning to form for its resolution, it can ask how the techniques of relation become a conduit for a relational movement that exceeds the very form-taking art so often strives toward. Instead of stalling at the object, it can explore how the forces of the not-yet co-compose with the milieu of which they are an incipient mode. It can inquire into the collective force that emerges from this co-composition. It can develop techniques for intuiting how art becomes the basis for creating new manners, new modes of collaboration, human and nonhuman, material and immaterial. It can touch on the technicity of the more-than of art’s object-based propositions. It can ask, as Deleuze does throughout his work, how the collective iteration of a process in the making itself thinks. It can ask what forces it to think, to become. It can inquire into the forces that do violence to the act of making time, and it can create with the unsettling milieu of a time out of joint, intuiting its limits, limits that often have little to do with form. And in so doing, it can create a time for thought “that would lead life to the limit of what it can do,” complicating the very concept of life by pushing life “beyond the limits that knowledge fixes for it.”17 Art, then, as a manner, a technique, a way. This way is relational. It is of the field, in the milieu. Art as the intuitive process for activating the relational composition that is life-living, for creating a memory of the future that evades, that complicates form. The art of time: making felt the rhythm of the differential, the quality of relation. It is not a question of slow time, or quick time, of lingering or speeding. It is a question of moving experience beyond the way it has a habit of taking, of discovering how the edges of life-living commingle with the forces of that which cannot yet be seen, the “polymorphous magical substance that is the act and the art of seeing.”18 The art of time involves taking a risk, no doubt, but risk played out differently, at the level not of identity or being: risk of losing our footing, risk of the world losing its footing, on a ground that moves and keeps moving. Here, where movement always predates form, where expression remains lively at the interstices of the ineffable, the field of relation itself
becomes “inventor of new possibilities of life,” possibilities of life we can only intuit in the art of time.19
Part 2. The Art of Participation

My art practice is directly concerned with the art of time. Over the past decade, this has expressed itself chiefly through an exploration of how to create generative lures toward a participatory process that is capable of crafting emergent collectivities. Following from William Forsythe’s notion of the choreographic object,20 in which I see the object not as a stable form but as a lure or objectile, I have been concerned with what I have called the art of the event or event-time.21 Both in the SenseLab and through my own artwork,22 I have been concerned with how to create events that build on conditions that creatively delimit the participatory call even as they keep it open to surprise and invention.23 I think of delimitation—or what Brian Massumi and I have called “enabling constraints”24—at once as a boundary (a structured improvisation) and as a relational platform that works as an invitation to activate the work’s outside or openness, openness as an ethos of event-based hospitality rather than as a melting pot of common denominators. As I have suggested in my exploration of choreography as mobile architecture,25 or with my concept of the dance of attention, it is not about creating an “easy” space, but about crafting an ease of entry into a complex environment itself always under modulation. At the 2012 Sydney Biennale, I presented an ongoing participatory composition titled Stitching Time. It is to this artwork that I now turn my attention, focusing in particular on how the art of time co-composes with the “art” of participation. Stitching Time is a large-scale installation made up of a textile collection called Folds to Infinity (Figure 3.1), which comprises two thousand serged, buttonholed, magneted, and buttoned pieces of fabric. Folds to Infinity’s proposition is to create an ethos of collective time that is tuned to foldings that generate both garments and environments. Stitching Time was conceived as a site-conditioned work that attempted to bring into relational play both the environmental and the clothing-related aspects of the Folds to Infinity project. It
Figure 3.1. Detail from Folds to Infinity (2003–13), Erin Manning, at 18th Biennale of Sydney (2012). Photograph courtesy of Leslie Plumb.
was constructed with a double logic: on the one hand, using half the fabric, we created a sculpture that we hoped would activate the perception of color through light, and on the other, using baskets and other participatory cues, we invited participants to compose garments. The difference from earlier iterations of the work (particularly the series titled Slow Clothes) was that the sculpture, which covered almost three thousand square feet, was not malleable or “interactive” in the strict sense of the word. This time, because of the pressure of keeping the exhibition up for three months, the intention was not for the participants to fashion the environment using the sculptural surround, but for them to participate perceptually in its co-composition with the movement of light throughout the space. From the open invitation of previous exhibitions where dressing included dressing the environment, we narrowed the hands-on interactivity to only garment-making. But this does not mean that we only considered the hands-on component participatory: the field of light and color occasioned by the sculptural component was conceived as a choreographic object
that would allow a movement-based engagement with the changing light qualities of the space. This sculptural/perceptual experience was facilitated by a complex topology of transparent netting installed by artist and architect Samantha Spurr (Figure 3.2). The role of the netting was to provide a field for movement, both physical and perceptual, for the netting both reflected light and created passages into the color fields: sometimes hung overhead laying bare large areas of open floor space, and sometimes delimiting the space by curving across it and landing, the netting became an arena of complex movement potential from open vistas to curving small enclosures, from areas that invited a lean or a duck to areas that breathed with the height of the twenty-foot ceiling. With the netting, and with a complex stringing of metal wire to which the fabric was connected magnetically, an experience of seeing-through-light was mapped to the movements of the sun throughout the day—from early morning seeing-through-blue to late afternoon seeing-through-orange-and-red (Figure 3.3). We hoped that, moving across the space’s fourteen large windows, light might make color felt such that color itself would become a modality of perception.26 The hope was to make felt the materiality of perception, creating not only physical opportunities for complex mobility in the space, but also conceptual mobilities, or movements of thought. This was facilitated by there being no single vista: the perceptual field was architected to be as complex as the physical field, proposing a co-composition between perception and displacement. The proposition: to perceive is already to have (been) moved. Through this experience of seeing through light, color could be experienced immersively. This was particularly palpable when the participant looked through fabric sculptures connected to the netting, seeing the vista of saturated color as though submerged underwater.27 This co-composition of perception and movement sought to foreground the ethos of time built into the concept of the work. With each of the two thousand pieces cut, sewn, and composed individually (a process that took approximately five hours per piece of fabric), and with many of the pieces sewn over weekly sewing circles around conversation and communal soup, Stitching Time
Figure 3.2. Stitching Time (2012), Erin Manning, at 18th Biennale of Sydney (2012). Photograph courtesy of Brian Massumi.
was before all else an exhibition of time shared and time given. By bringing together a sculptural sense of time with the actual time given by the collaborators in the space to assist with the creation of garments (we were present for the duration of the work, from June to September 2012), the hope was to make felt how participation is always a gift of time, be it hands-on experimentation or the experience of how perceptual shifts are occasioned by time itself. While composing the space, much attention was given to subtraction. The tendency to fill the space was continuously counteracted by the realization that it was important to allow air and light to move through. Physical openings in the space, created by wafts of transparent fabric or densities of netting, worked to accentuate the potential for movement, both perceptual and physical, inviting the participant to linger, to sidle into or to wander comfortably through a variety of environments of color, some of them easy to reach and others more complex, inciting perhaps a slowing down, a bending or a curving. This invitation to linger was predicated not simply on the slowness of movement, but on its complexity.
To linger was to move at various speeds and slownesses, in different rhythms, a durational mobility facilitated by the lure of a chromatic arrangement, the color itself composing with the movement of light, from the early morning blues and greens to the warm reflective browns and deep oranges of the morning light, to the yellows and transparent golds of the afternoon light, to the reds of sunset. Although to some this sculptural/perceptual aspect of the work may seem nonparticipatory (which may explain why some participants were drawn to alter the sculptural installation), for me this was an intrinsically relational environment. Like most kinds of art, it was designed to activate a field of relation, inviting the participant to engage with a collective refashioning of space-time. This, an activity that sunsets inspire, as do fall walks in the woods surrounded by the colored leaves, was collective in the sense that movement was activated in an ecology that included but also exceeded me or you, movement—of light, of perception, of sound and smell—always in excess of this or that human body. To engage with
a sculpture is not to be within a field of limited activity. It is, at its best, to be moved in a vista where the intensity of engagement cannot be mapped to a single object or body. During the first month of the exhibition, much of our attention was tuned to keeping the light sculpture in place. We daily wondered whether this gesture should be encouraged, whether we should simply allow the work to fall to the ground and lose its quality of seeing-through-color, and whether it was fetishistic to try to hold to the sculpture that I built so carefully over the period of a month before the Biennale began. How important to the experience might a hands-on engagement with the sculpture prove to be? Was the participant engaging more intensely with Stitching Time through the dismantling or rebuilding of the sculpture, or was he or she simply territorializing the field of participation? As time went by, we came to the conclusion that, too often, interaction at this level seemed reduced to a kind of Do It Yourself (DIY) aesthetic without much of a sense of participating in the perceptual composition of the space. With hundreds of participants in the space daily, this kind of quick interaction with the sculptural component of the work tended to unfold this way: A participant would enter the space and perhaps sense the inherent mobility of the environment (or perhaps have been told that the space was participatory). For the participant who didn’t take time to encounter the environment, this perhaps translated to the idea that the space itself could be directly altered. Perhaps Stitching Time’s immersive quality signaled an invitation to treat the sculptural proposition as malleable? Or perhaps the participant tended toward immediate hands-on interactivity because the art event had been mediatized as attractive to families, with an emphasis on the “fun” interactive aspect of art and its capacity to “unleash” the “inner artist” in us all?28 Whatever the reason, this kind of entry into the space was focused on “doing,” and this most often with very little attention to the wider ecology of the larger environment. Toni Pape, a collaborator onsite, called this quick interactivity “stillborn” not because it couldn’t be interesting to engage this way—some of the earlier iterations of Folds to Infinity were conditioned for just this kind of participation—but because this kind of engagement with the environment in this
context rarely occurred in a way that complexified either the vista itself or its engagement with color, light, or time. It tended instead to be more goal directed or task oriented, the participant choosing the fabric closest at hand to hang on the installation in the most rudimentary way. A space that took four weeks to compose was altered in seconds. This was rarely done to harm the space per se—it was done more in an ethos of contribution. (“This is what I thought I was meant to do” was a typical statement.) Participants had sensed the environment’s mutability, but instead of responding with a sense of care for the potential of transformation, had immediately put themselves at the center. This leads to the question of participation itself. Why assume that an affectively modulating environment must be altered in an instrumental way? Why assume that a collaboration must be material, in the sense of altering an existing work? The issue, it seems to me, is with the concept of attention. Attention, when posited as a purely human quality, makes the human the center of any interaction. Their participation is based on what they see and how they attend to the work’s transformation. What if instead the dance of attention of the environment itself were what is at stake? What if instead of immediately wanting to change an environment, participation meant attending to how the space reveals itself (to the way it is composed with light, for instance, or how it attends to movement)? What if “adding” to the work were not measured in instrumental terms and were instead conceived in terms of a collective ethos of attention that included every aspect of the ecology—the work itself, the environment, the artworks surrounding it, the air, the light, the sound, and, in this case, the island, the water, the seagulls? If such a complex ecology were at stake, perhaps participation would mean to attend to the incipient composition already at work instead of bringing to the piece a haphazard addition? For almost without fail, that participant who came in and immediately altered the piece did not stay to reflect on the change, instead quickly leaving the work to move on to the next one. Paradoxically, perhaps, it seemed that it was the participant who “did” less who participated more thoroughly, the participant who was less quick to “do.” To enjoy the processual force of time,
it is necessary to take time, and to give time. The participant who senses this will have lingered, less concerned with what the work is “about” than with discovering its resonances. To sense the force of time—the art of time—in any work is to have participated in its process, to have felt what it seeks to give. For Stitching Time, the participant who delved into the art of time was the one with whom we tended, as activators of the work, to share the most unique moments, with whom the conversation unfolded in the most surprising ways. This was the participant with whom the most intricate garments were made. Or with whom no garment was made, and no conversation was had, but whose very presence in the space altered its affective tone in a way that colored the rest of the day. This story raises all kinds of questions, not least of which is how to refrain from making the participant the holder of the key to the work. In my writing, I have often spoken critically of this tendency of the artist to feel mistreated or misunderstood by the participatory public, and have explored the idea that when the work works it is because it has been able to create the conditions for a kind of participation that outdoes its initial proposition. But as I watched the three thousand–plus people per week move through Stitching Time, I wondered about the role of tending in this proposition: How responsible is the public for the tending of a work? How responsible are we collectively for the opening of the work to its outdoing? What is our role in the tending of the way of art, in the art of time?
Limits of Existence

As a starting point I think of participation not as something that “happens” to the work (not as something an outside does in response to an already crafted inside) but as that which is stimulated by the work’s own dance of attention. The dance of attention, or a work’s mobile architecture (or “diagram,” in Francis Bacon’s terms29) is felt when the work becomes capable of attuning to the force of its own potential in a way that exceeds its initial proposition. When the work works (or when it “stands up,” as Deleuze and Guattari might say), it creates its own momentum, its own block of sensation, its own field of forces. When this happens, the work
becomes what it does, and it does so in a continual process of outdoing technique. From the precision of the techniques that create its bedrock, the work evolves into a technicity that unleashes it to a becoming that could not have been mapped in advance. The work comes to life. The art of participation must begin here, it seems to me, where it is no longer a question of subject and object, of artist and participant, but a question of how the work calls forth its own potential evolution. To explore the art of participation, it is necessary to return to a few key issues raised earlier around the notion of the art of time.

1. What is activated by an artwork is not its objecthood. (An object in itself is not art.) Art is the way, the manner of becoming, that is intensified by the coming-out-of-itself of an object. It is the object’s outdoing as form or content.
2. Intuition is the work that sets the process of outdoing on its way.
3. The manner of becoming makes time felt in the complexity of its nonlinear duration. This is an activation of the future—the force of making felt what remains unthinkable (on the edge of feeling).
4. The activation of the manner of becoming is another way of talking about the work’s technicity, or its more-than. This more-than is a dephasing of the work from its initial proposition (its material, its conditions of existence).
5. The relational field activated by the work’s outdoing of itself touches on an ecology that is more than human. The work participates in a worlding that potentially redefines the limits of existence.

Limits of existence are always under revision, particularly when confronted with a schema that does not place the human at the center of experience. How a constellation evolves—an artwork-human constellation, or an artwork-environment-artwork constellation—always has an effect, and this effect cannot be abstracted from the question of participation. The art of participation therefore takes
the notion of modes of existence as its starting point, asking how techniques of encounter modify or modulate how art can make a difference, opening up the existing fields of relation toward new forms of perception, accountability, experience, and collectivity. This aspect of the art of participation cannot be thought separately from the political, despite the fact that its political force is not necessarily in its content. This is not about making the form of art political. It is about asking how the field of relation activated by art can affect the complex ecologies of which it is part. To address these issues, I return to the question of sympathy as it arises in Bergson’s work around intuition.
Sympathy

Sympathy is not the benevolent act that follows the event. Sympathy is neither the result of, nor a response to, an already-determined action. Sympathy is the vector of intuition without which intuition would never be experienced as such. An event is sympathetic to the force of its intuition such that it opens itself to its technicity. Sympathy is how the event expresses this outdoing. The event has sympathy for its creativity, its capacity to express the novelty (the inexpressibility) intuition has inspired. To repeat, “we call intuition that sympathy by which we are transported to the interior of an object to coincide with what it has that is unique, and, consequently, inexpressible.”30 As Lapoujade emphasizes, it is impossible to think intuition and sympathy as wholly separate from one another, but neither should we consider them as the same gesture. Intuition touches on the differential of a process, and sympathy holds the contrasts in the differential together to express the ineffable. Sympathy is the gesture of making this ineffability felt, allowing for the expression of a certain encounter already held in germ. Where intuition is the force of expression or prearticulation of an event’s welling into itself, sympathy is the way of its articulation. Sympathy is a strange term for this process, so connected is it with the sense of applying a value judgment on a preexisting process. It may therefore not hold the power as a concept to make felt
the force of what it does, or can do. I use it here as an ally to concepts such as concern and self-enjoyment in Whitehead, concepts that remind us that the event has a concern for its own evolution, and that this concern, the event’s affective tonality, the how of its coming-to-be, is central to any understanding of a more-than-human framework. To make sympathy the driver of expression in the event is to bring care into the framework of an event’s concrescence, to foreground how intuition is a relational act that plays itself out in an ecology that cannot be abstracted from it. Intuition leads to sympathy—sympathy for the event in its unfolding. Without sympathy for the unfolding, the event cannot make felt the complexity of durations of which it is composed. For this is what sympathy is—a tending to the complexity of an intuition that lurks at the very edge of thought where the rhythms that populate the event have not yet moved into their constellatory potential. Outdoing is another complex notion. What is a work in its outdoing, and why is it important for an event to be sympathetic to its outdoing? A work outdoes itself when it begins to participate with its conditions for emergence in a way that exceeds expectations, becoming a work in motion. This mobility is not necessarily spatial in an extensive sense—it is a mobility of thought, of perception, of feeling. For an event to have sympathy with its outdoing means that it is capable of being sensitive to the absolute or immanent mobility of a process—to what may be imperceptible as such but is immediately felt. Sympathy understood this way is neither comforting nor easy. A work may undermine expectation, may push boundaries, may dispel intensity or activate havoc. In the case of Stitching Time, there has certainly been this sense of an unleashing of the unexpected, resulting in many momentary desires to bring it back to the individual, to capture the event as though it were outside, to hold it to expectation, to harness its potential. Ultimately this desire to hold back will always fail, however, for the event is the how of its unfolding, and this how includes all participatory aspects in its relational movement. What is at stake therefore is never how to capture it to stop its flow but how to “allow the passage to the ‘interior’ of realities, to seize them from ‘inside.’ ”31 Sympathy: to outdo the inside, making apparent how the folds hold the germs of intuition.
The Way of Art

If the art of time is inextricably linked to art as way, as path, then art and intuition must always be seen as synonyms, for intuition is the fold in experience that allows for the staging of a problem that starts a process on its way, or curbs a process into its difference. This raises the question of where intuition is situated when art lingers on the side of participation. Is participation also intuitive? I would say that where art is mobilized through an intuitive process that crafts and vectorizes the problem that will continue to activate it throughout its life, participation is the sympathy for this process. Participation is the yield in what Raymond Ruyer calls the “aesthetic yield.” It is the yield both in the sense that it gives a sense of direction to a process already underway and that it opens that process to the more-than of its form or content. Aesthetic yield expands beyond any object occasioned by the process to include the vista of expression generated by art as event. This I call artfulness. Artfulness, the aesthetic yield, is about how a set of conditions coalesce to favor what Lapoujade calls a “seizing of the inside” that generates the field of expression we call participation. The art of participation is its capacity to activate the artfulness at the heart of an event, to tap into its yield. Artfulness as the force of a becoming that is singularly attendant to an ecology in the making, an ecology that can never be subsumed to the artist or to the individual participant. Artfulness: the momentary capture of an aesthetic yield in an evolving ecology. All ecologies are more than human. They are as much the breath of a movement as they are the flicker of a light and the sound of a stilling. They are earth and texture, air and wind, color and saturation. In the context of an artwork, artfulness is how the complex relation between intuition and sympathy comes into contact with a worlding that itself expresses the more-than of an ecology in the making. This is not quickly or easily done: Bergson speaks of intuition in terms of the necessity of a long camaraderie engendered by a relationship of trust that leads toward an engagement with that which goes beyond premature observations and preconceived neutralizing facts.32 Intuition is a rigorous process that agitates at the
very limits of an encounter with the as yet unthought. Artfulness is the sympathetic expression of this encounter. Tapping into the differential, artfulness opens the world to the novelty Whitehead foregrounds—a novelty that is not about the capitalist sense of the new, but about the force of mixtures that produce new openings, new vistas, new complexions for experience in the making. This is artfulness: the world’s capacity to make felt the force of a welling ecology. This welling ecology is not a general fact—it is an intensive singularity, an opening onto an outside that affects each aspect of experience but cannot be captured as such. A fleeting sensation of the potential for the world to differ from itself. A captive attention to an intuition that for each occasion there remains an opening to the unsayable, the unthinkable. And a sympathy for the force of this unthinkability. Artfulness does not belong to the artist, or to art as a discipline. Not every event is artistic, but many events are artful. The distinction made here returns us to the question of the yield and forces us to ask how intuition and sympathy coincide in the everyday. Whether in the artistic process or in the everyday, when there is artfulness it is because conditions have been created that enable not only the art of time, but also the art of participation. For it is only when there is sympathy for the complexity of the problem opened up through intuition that an event can yield to its more-than. When this happens, a shift is felt toward a sense of immanent movement—and the way at the heart of art is felt. It is not the object that stands out here, not the tree or the sunset or a painting. It is the force of immanent movement the event calls forth that is experienced, a mobility in the making that displaces any notion of subjectivity or objecthood. This does not mean that what is opened up is without a time, a place, a history. Quite the contrary: what emerges at the heart of the artful is always singular—this process, this ecology, this feeling. It’s just that its eventness in the now of its immanent mobility in an ecology of practices is as central to it as the complexity and precision of its history. Artfulness is an immanent directionality, felt when a work does its work, when a process activates its most sensitive fold where it is still rife with intuition. This modality is beyond the human. Certainly, it cuts through, merges with, captures and dances with the
human, but it is also and always more than human, active in an ecology of resonances that cannot be mapped strictly onto the human. This is what Bergson means when he speaks of the spirituality of duration—the art process now has its own momentum, its own art of time, and this art of time, excised as it is from the limits of human volition, collaborates to create its own way. The force of art is precisely that it is more than human. Rhythm is key to this process that flows through different variations of human-centeredness toward ecologies as yet unnamable. Everywhere in the vectorizations of intuition and sympathy are durations as yet unfolded, expressions of time as yet unlived, rhythms still unlivable. This is what makes an event artful—that it remains on the edge, at the outskirts of a process that does not yet recognize itself, inventing as it does its own way, a way of moving, of flowing, of stilling, of lighting, of coloring, of participating. For this is how artfulness is lived—as a field of flows, of differential speeds and slownesses, in discomfort and awe, distraction and attention. Artfulness is not something to be beheld. It is something to move through, to dance with on the edges of perception where to feel, to see, and to be are indistinguishable. What moves here is not the human per se, but the force of the direction the intuition gave the event in its preliminary unfolding. Techniques are at work, modulating themselves to outdo their boundedness toward a technicity in germ. Thought, intent, organization, consideration, habit, experience—all of these are at work. But with them is the germ of intuition born of a long and patient process now being activated by a sympathy for difference, a sympathy for the event in its uneasy becoming. It is not the human but the art that has become capable of activating or generating an event-time that invents with the forces that exceed our humanness. To touch on the artful is to touch on the incommensurable more-than that is everywhere active in the ecologies that make us and exceed us. Tweaked toward the artful in the process of making, art becomes a way toward a collective ethos. For the incommensurability it calls forth already holds the collective in germ. From the most apparently stable structure to the most mobile or ephemeral iteration, art that is artful activates the art of participation, making
felt the transindividual force of an event-time that catapults the human into our difference. This difference, the more-than at our core, the nonhuman share that animates our every cell, becomes attentive to the relational field that opens the work to its intensive outside, its own nonhuman share. This relational field must not be spatially understood. It is an intensive mattering, an absolute moving that inhabits the work durationally. It is the art of time making itself felt. A fielding of difference has been activated, and this must be tended. The art of participation involves creating the conditions for this tending to take place. This tending is first and foremost a tending of the fragile environment of duration generated by the working of the work. A tending of the work’s incipient rhythms. I say fragile because there is so much to be felt in the process of a work’s coming to resonance with a world itself in formation. Take the example of a busy day within Stitching Time. When we turned our attention to the flows of interactivity within the space in order to better understand how to stop the process of the sculptural disintegration, we found that the movements of participants were contagious. One person playing with the fabric sculpture was an invitation for twenty-five more to begin dismantling it. By attending to the modulation of the field of participation, we also noticed how little it took for the quality of duration to be radically altered in a way that inflected the affective tonality of collective engagement. How to compose with this contagious energy, given that there was also, as mentioned at the beginning, a physically participatory aspect to the piece that involved making garments? Throughout the installation, twenty-five baskets hung, each of them full of pieces of the larger Folds to Infinity collection (Figure 3.4). In the first few days, assuming that activation would be a challenge (based on early experiences where it was quite challenging to convince participants to engage with the fabric), we told all the participants entering the space that they were welcome to take fabric from the baskets to make a garment of their choice. Since to make a garment was a challenging proposition that took time—this despite the many buttonholes, buttons, and magnets in the Folds to Infinity collection—we assumed that a minority of people would stay long enough to fashion a garment. The unspoken proposition
was that, were they to stay and compose a piece that they liked, the garment would be theirs to take home afterward, moving nodes of time from the installation into the wider world. Our implicit invitation was to assist the few who took up our proposition, and to facilitate their process of making time by offering them tea and conversation. In the first week, we had many lovely encounters with people who stayed for cups of tea and made beautiful, carefully crafted garments. But it quickly also became apparent that there was a problem with the conditions we were setting for the space. As outlined previously, counter to our expectations, the problem was not with a lack of activation. Not only were people continuously seeking to rebuild the sculptural installation, they were increasingly moving to the baskets in a frenzy that undermined the very process Stitching Time was trying to create. Garments were put together hastily only to be left on the floor in a flurry of moving-on. Should we see this as part of the process and try to tune to the experience of taking and making time?
It took a few weeks to work out, but it eventually became clear that the issue was that we ourselves were honing the experience of stitching time to the object itself. Despite our best intentions, we were directing the perceptual and expressive field of the work toward garment-making (via the baskets), thus preempting a more complex durational engagement with the space. In a work that was designed to give time by creating an ethos of shared time, we were facilitating an instrumental approach reminiscent of the very interactive model we were trying to resist. Of course, as with any environment, there were overlapping ecologies. Even when the space was at its most chaotic, people still wandered the space, engaging it on its more subtle levels, but the waves of frenetic movement were overpowering. Ten or fifteen people would be quietly wandering around, or would be sitting at the long sewing table having a cup of tea when suddenly one group would come in and reach for the baskets. Within minutes the space would be frantic with a wave of “doing” that had much less to do with taking time than it had to do with parents trying to “occupy” or “entertain” their children (make outfits, take photographs, move on to the next artwork). The issue was not simply that the participants weren’t acting the way I had hoped, but that we hadn’t found the best conditions to facilitate a surfacing of experience that could exceed habitual responses. That the kids dressed themselves (or that the parents dressed them) was not the problem. More problematic was that the parents assumed that the space did not cater to them and treated it instead as a kind of day care or play space, allowing the kids to run around, cut or draw on the fabric, undo the sculpture or pull down the baskets, leaving strewn fabric on the ground in their wake. The space thus, despite itself, became a passive receptor for modes of engagement that seemed to us less about its own conditions of emergence than conditions rehearsed in other settings. Habitual responses run deep. This is the force of intuition, that it is capable of building on the habitual to find the nugget of a question that opens a problem to an aspect of itself as yet untouched or unthought. Or that it finds within habitual networks an opening as yet unseen. But intuition is honed, and does not come in a frenzy. Nor does technicity. What comes in a frenzy is the habitual, and
with the habitual come its well-rehearsed techniques. For this is the force of technique, that it builds so well, and so consistently, on habit. A large open space inspires running in children. A carpet inspires rolling, or resting, or feeling at home. And fabric inspires touching. All of these tendencies are absolutely necessary to Stitching Time, but the hope is for them to be activated in excess of the habitual. What art seeks to do, it seems to me, is modulate the habit. Work with it to tweak it. We want movement, but not simply one velocity (running). We want a sense of feeling at home, but with an ethos of care for the collective environment. We want touching, not simply for taking, for “getting something done,” but also for reaching-toward in an ethos of time shared. Paradoxically, perhaps what an immersive participatory environment such as Stitching Time makes felt is the force of the habitual and its capacity to undermine the art of participation. And if this is the case, it is likely to some extent due to how art events are increasingly formulated for the public. In the case of the Sydney Biennale, this was certainly true, particularly of the works on Cockatoo Island—the “non”-museum venue. Here, the exhibition was billed as a “family event” where “kids will be able to discover the artworks and history of the Island through fun, creative activities [giving] children and their families a chance to get creative and unleash their inner artist through hands-on art making activities, as well as participation in family-friendly tours.”33 How could this not activate the very habitual response associated with kid-oriented entertainment venues? The problem is not with entertainment, but with the dispelling of the modulating field of experience art can create. For children are often the most likely among us to touch on the intuitive threads unarticulated in a process. They recognize artfulness, and given the chance, are some of the keenest perceivers of art’s differential. The problem is not with them but with the rhetoric, and with the expectation that every event demands the same habitual response. Often, “fun” is linked to the quick, the easy, the immediate. In the most frenzied moments of Stitching Time, I too often heard a parent exclaiming “Isn’t this fun!” to a distracted child. Fun is not what art can do, it seems to me. Fun is too easily linked to habit, to an experience that already has its contour and expectation. What
art can do is quite different: It demands the invention of a tonality, a feeling-tone that differs from the habitual. Without this, the artful is sidelined. The irony is that children have an instinct for the artful—just watch them engage with the dance of attention of an ants’ nest or with the movements of clouds. Among children left to explore the space quietly without the added excitement of “fun,” this tendency is clearly apparent. Children who were invited to move slowly and discover the space tended to be some of our most patient visitors, and some of the most creative (Figure 3.5). These children who took the time to invent were much less quick to make decisions in advance about what the environment offered. It is the rhetoric of fun, not them, that leads toward the simplification of art’s process, putting art in the field of entertainment, thereby placing it right back within a capitalist ethos that is much less about taking or giving time than “getting something out of it.” There is nothing to be “gotten out of art.” It’s all in the making, in the giving. This is not to say that art can’t be entertaining. It is to emphasize its nuance. No nuance can be perceived in the frenetic movements of “getting it” or “doing it.” Nuance is not felt through rapid consumption (and “fun” is a mode of consumption). Nuance is perceived when the event of time is felt, when the art of time becomes the art of participation. It is perceived when a work resonates beyond the easily recognizable habits it engenders at first glance. The art of making time, and the art of taking time—both of these are about continual modulation, modulation not simply on the part of the artist, but also on the part of the participants in the ecology the work occasions. Every work is metastable and requires continual tending to recalibrate it within the complex durational fields it calls forth. A painting badly hung within a museum can fall flat, as can a video piece set up in a way that does not facilitate its taking place within the wider constellation of the exhibition. The question of modulation is vital to the art of participation. This is where the event-space of the art of time begins to compose with the notion of a choreographic object. The choreographic object is a lure for the activation of event-time toward the generation of complex forms of collective engagement. An exhibition choreographed with a sense of collective mobility makes felt the
force of time by facilitating intuitive linkages not only between works but within the relational field itself. When this happens, art becomes less an object than a conduit or vector toward a seeing-with, a sympathizing-across, a moving-through. Emphasizing choreography in this way also serves to remind us that there is no event that isn’t in some regard about movement, be it a sculpture or a notebook, an immersive installation or a sound piece. The world moves us, and how it moves us matters. For how it moves us makes felt the vector quality of its conditions of entry, makes felt the way it creates and modulates space-times in the making, makes felt the way it activates the differential not only within its own canvas but across works and in the complex ecologies they call forth. Think of moving through an installation. Note how the red of a flower exquisitely cut in wood in one space resonates with the touch of light that surfaces in a video installation in an adjacent space. Note how this quality affects the entry into another environment, and how it tunes to a tonality already at play—a lightness, an airiness. These are sympathetic attunements to a generative environment. They activate its moreness by responding to it. And, by doing so,
they make the intuition at the heart of the work palpable not only to themselves, but to the field of relation in its choreographic plenitude: they make felt how the work is capable of fielding the myriad problems that continuously relaunch it. This is why Bergson speaks of sympathy as that which reveals the nonhuman in us. Sympathy makes felt how the tendency, the way, the direction or incipient mobility, is itself the subject of the work. It makes tending the subject, undermining the notion that either the work or the human comes to experience fully formed. Sympathy: that which brings the force of the more-than to the surface. That which makes felt how the force of experience always exceeds the object.
Vectors

Despite my focus on human participation here, the art of participation does not find its conduit solely in the human. Art also does its work without human intervention, activating fields of relation that are environmental or ecological in scales of intermixings that may include the human but don’t depend on it. How to categorize as human or nonhuman the exuberance of an effect of light, the way the air moves through a space, or the way one artwork catches another in its movement of thought? This is surely the force of curation: its choreographic capacity to bring to life the lingering nonhuman tendencies that bridge fields activated by distinct artistic processes.34 Artfulness is always more than human. Whitehead’s notion of vectors is useful to get a stronger sense of the more than human quality of experience artfulness holds. The vector, in Whitehead’s usage, prolongs its use in physics as a force of movement that travels from one occasion of experience to another or within a single occasion. What is particular to Whitehead’s definition, however, is the way he connects the vector to feeling. “Feelings are ‘vectors’; for they feel what is there and transform it into what is here.”35 For Whitehead, feelings are not associated with a preexisting subject. They are the force of the event as it expresses itself. Understood as vectors, feelings have the force of a momentum, an intuition for direction. They are how the event expresses sympathy for its own intuitive becoming.
Whitehead’s theory of feeling turns the notion of human-based participation on its head. What the feeling has felt is how the event has come to expression. The subject is its aftermath. Subject here is not limited to the human—it is the marker of that which can be located as the culmination of an occasion of experience. An occasion of experience always holds such a marker—once the occasion has come to concrescence, it will always be what it was. This is what Whitehead calls the “superject,” or the subject of experience. Making the subject the outcome of the event rather than its initiator reminds us that the subject of an event includes its vector quality—in Massumi’s terms, its thinking-feeling. The subject can never be abstracted or separated out from the vector quality, the “feeling-tone” that co-composed it. The artful is the event’s capacity to foreground the feeling-tone of the occasion such that it generates an affective tonality that permeates more than this singular occasion. For this to happen, there has to be, within the evolution of an occasion, the capacity for the occasion to become a nexus that continues to have an appetite for its process. This leads us back to contagion. What is complex about contagions is that they are always multiple. A frenetic contagion of garment-making can coexist with the quietness of an experience of red turning to yellow. These are independent events occurring in a wider event-space. They are contemporary but dissociated. And yet, when the conditions allow it, they can share a feeling-tone. This vector that can activate across contemporarily independent occasions fashions a wider tonality that can begin to modulate the time of the event as a whole. Activated by vectors that cross and thus co-compose, the event has created its own mobility, its own ambiance, and in so doing, it has begun to run itself: a mobile architecture. This is a transient quality, of course, continuously cut across by captures that reinstate the necessity of harnessing the now—someone steals buttons, or someone’s carelessness threatens to break the tea set. Feeling-tones are not about ease, or about comfort. They are about the force of an individuation that momentarily refuses its own capture. As feeling-tone, vectors attune to the field of relation in its becoming-event, and tune also to its more-than. This allows a
becoming-event to participate in a becoming-society, a becoming of a wider field of relation that outdoes the atomicity of its initial coming into being. As Whitehead underscores, this is a rhythmic (and not a linear) process. It swings from the in-itselfness of a given actual occasion, where what is fashioned is simply what it is, to a wider field where the openness to fashioning remains rife with potential not only in the work at hand but across a wider expanse of artfulness. Nature does this well. Whitehead talks about this in terms of the creation of worlds—“feeling from a beyond which is determinate and pointing to a beyond which is to be determined.”36 To be determined here is resolutely to be in potentia—for how a feeling-tone vectorizes cannot be mapped in advance, and whether it lands in a way that activates a worlding cannot be predicted. But it can be modulated, and this is the art of participation.
Contemplation

A feeling-vector contemplates its passage, attending to the dance of an occasion coming into itself. The occasion cannot be abstracted from its feeling-tone. The contemplation of its becoming cannot be separated out from how it comes into itself. Placing contemplation, intuition, feeling-tone, and sympathy together, what emerges is an artfulness that refuses to be instrumental. There is no use-value—it does nothing that can be mapped onto a process already underway. It has no endpoint, no preordained limits, no moral codes. But it is conditioned and conditioning. To say that a process is conditioning is to say that it is born of enabling constraints that facilitate the most propitious engagement with the problem at hand, enabling the passage toward a field that yields. It does its work—it stands up—when this yield was already present in germ in the initial problem that activated the process, in the intuition that tapped into how technique can become technicity. Without conditions, the aesthetic does not yield, and the work cannot become in excess of the techniques that brought it into being. Conditions facilitate contemplation. Contemplation, understood as the act of lingering-with, of tending-to a process in its outdoing
of technique, is not about a doing, but neither is it simply a passive modality. It is active and activating in the sense that it attends. It attends to the conditions of the work. And it attends to the way the work attends to its conditions. Contemplation is passive only in the sense that this attending provokes a waiting, a stilling, a listening, a sympathy-with that exceeds not only the work as it presents itself, but any notion of the subject as preexistent. Contemplation, like intuition and its counterpart, sympathy, activates the differential of a work. It reveals not how the subject (the artist, the participant) feels about a work, or what a subject sees in the work, but how the field of experience a work creates itself attends to a certain singular mode of attention. Contemplation cannot be separated out from the life of the work. Contemplation makes the artful felt. Though it may sound spiritual or otherworldly, artfulness is anything but. It is with and in the world, in the uneasy balance between dancing and thinking, leaping and writing, that Zarathustra strives toward. Here, in the midst of life-living, artfulness reminds us that the “I” is not where life begins, and the “you” is not what makes it art. Made up as it is of a thousand contemplations, it reminds us instead that “we speak of the self only in virtue of these thousands of little witnesses which contemplate within us: it is always a third party who says ‘me.’ ”37 This is why its occurrence is so rare. For artfulness depends on so many propitious conditions, so many tendings, so many contemplations, so many implicit linkages between intuition and sympathy. And more than all else, it depends on the human getting out of the way of a process underway that exceeds us, allowing art to do the work it can do within an ecology of practices that, while often directed by us, does not find its resting place solely in the world of the human. It is in this vista of the more-than that we find pockets of artfulness. There is the artfulness of nature, or of the animal.38 Pockets of artfulness are nodules of time caught in the sound of a bee (Alex Finlay) or in the weight of unspun wool blowing color in the wind (Cecilia Vicuna). They are the ineffable force of an assault both homely and otherworldly occasioned by Ria Verhaege’s Living with Cuddles (Figure 3.6). They are the wonder of Ed Pien’s paperworlds or the sound of the chimes in a roofless building on
Figure 3.6. Living with Cuddles (2012), Ria Verhaege, at 18th Biennale of Sydney (2012). Photograph courtesy of Ria Verhaege.
an island (Tiffany Singh). They are the energy of the line in Emily Kngwarreye, and the intimate and slow movement of the human dwarfed by the icebreaker in Guido Van der Werve’s Nummer Acht: Everything is going to be alright. Artfulness: the way the art of time makes itself felt, how it lands, and how it always exceeds its landing.
Notes

1. Today, the word for art in German is Kunst, and die Art is used only to convey the sense of manner or way described in the text. In French, English, and most Romance and Indo-Germanic languages, the two meanings have become separated, though the sense of “the art of” still seems to carry that earlier definition.
2. Gilles Deleuze, Bergsonism, trans. Hugh Tomlinson and Barbara Habberjam (New York: Zone Books, 1991), 33.
3. I have developed relational movement as a concept in both Politics of Touch: Sense, Movement, Sovereignty (Minneapolis: University of Minnesota Press, 2007) and Relationscapes: Movement, Art, Philosophy (Cambridge, Mass.: MIT Press, 2009).
4. Deleuze, Bergsonism, 41.
5. Ibid., 74.
6. David Lapoujade, Puissances du temps. Versions de Bergson (Paris: Editions Minuit, 2010), 9. My translations throughout.
7. Ibid., 12.
8. Ibid., 21–22.
9. Ibid., 24.
10. Ibid., 53.
11. Ibid., 56.
12. Gilles Deleuze, Proust and Signs, trans. Richard Howard (New York: George Braziller, 1972), 13.
13. Ibid., 47.
14. Ibid., 48, 61.
15. Lapoujade, Puissances du temps, 62.
16. Michael Taussig, What Color Is the Sacred? (Chicago: University of Chicago Press, 2009).
17. Gilles Deleuze, Nietzsche and Philosophy, trans. Hugh Tomlinson (New York: Columbia University Press, 1983), 101.
18. Taussig, What Color, 47.
19. Friedrich Nietzsche, Philosophy in the Tragic Age of the Greeks, trans. Marianne Cowan, Gateway ed. (Washington, D.C.: Regnery, 1962; repr., 1996), 3.
20. I have explored in detail the concept of the choreographic object both in “Propositions for the Verge: William Forsythe’s Choreographic Objects” in Inflexions 2 (January 2009), www.inflexions.org, and in Always More Than One: Individuation’s Dance (Durham, N.C.: Duke University Press, 2012).
21. I discuss event-time in more detail in Always More Than One.
22. The SenseLab is conceived as a “laboratory for thought in motion.” With the collaboration of local and international philosophers, artists, and community activists, we have organized research-creation events in our Technologies of Lived Abstraction Event Series, which has also spawned a book series at MIT Press by the same name and the journal Inflexions: A Journal for Research-Creation, www.inflexions.org.
23. See, for instance, “Fiery, Luminous, Scary” in Always More Than One, 124–31. An earlier version of the same text is also available as “Fiery, Luminous, Scary” in SubStance 40, no. 3 (2011): 41–48.
24. Brian Massumi and I discuss the concept of enabling constraints in Thought in the Act: Passages in the Ecology of Experience (Minneapolis: University of Minnesota Press, 2014). This is a concept developed with the SenseLab. It has also been explored in the writings of SenseLab members Christoph Brunner and Troy Rhoades.
25. I discuss William Forsythe’s One Flat Thing, reproduced as an instance of a mobile architecture and explore the possibility of a choreographic object such as Folds to Infinity becoming a mobile architecture in “Choreography as Mobile Architecture,” Always More Than One, 99–123.
26. This effect is facilitated by the more than ninety spotlights in the space that merge with the color, emphasizing the nuances between tone and texture.
27. Samantha Spurr designed the netting to connect to parasitical structures made of metal that enable the hanging of the collection into sculptures. Because Folds to Infinity is magnetic, it requires such metallic structures to become sculpture. This parasitical use of the netting creates the impression that the fabric is floating in the air. The sculptures themselves are mostly covered with translucent fabric, which further emphasizes the quality of transparency, or seeing through color into light. Further, the fabric used for these purposes has an iridescent quality composed as it is with threads of contrasting colors—blue turning to purple, green turning to yellow, orange turning to red. This creates a shimmering effect that brings out its complementarity, contrasting it with the more opaque colors—the reds, oranges, and greens of the wider hanging installation. Our hope is that this creates a complexity in the field of color itself, whereby color is perceived less as an object (as a piece of fabric, or as a pigment) than as an effect of light.
28. See, for example, “Biennale of Sydney Caters for Families and Kids,” artshub, June 18, 2012, www.artshub.com.au/.
29. See Gilles Deleuze, Francis Bacon: The Logic of Sensation, trans. Daniel W. Smith (Minneapolis: University of Minnesota Press, 2003).
30. David Lapoujade, “Intuition et Sympathie chez Bergson,” Eidos 9 (2008): 11. My translations throughout.
31. Ibid., 12.
32. Ibid.
33. “Biennale of Sydney Caters for Families and Kids.”
34. The Sydney 2012 Biennale, titled All Our Relations, is a good example of this kind of curatorial practice. Catherine de Zegher and Gerald McMaster not only chose artists whose works complement one another, they also installed the exhibition in a way that created implicit connections between works. These connections are not simply content based or form reliant. They are connections at the level of resonance that make felt a field of relation that is itself artful.
35. Alfred North Whitehead, Process and Reality (New York: Free Press, 1978).
36. Ibid., 163.
37. Gilles Deleuze, Difference and Repetition, trans. Paul Patton (New York: Columbia University Press, 1994), 75.
38. See Brian Massumi, What Animals Teach Us about Politics (Durham, N.C.: Duke University Press, 2014).
The Aesthetics of Philosophical Carpentry

Ian Bogost
This is a chapter about the practice of philosophy in the near future, based on a talk about the practice of philosophy in the near future. This is a chapter about the objects out of which philosophy is made as much as the objects of philosophy. Its form matters. It lives here now, in print, where once it did in speech—and in space: Shorewood, Milwaukee—and in time: early May 2012. It was invented for a room and now it finds itself on these pages instead. Some residue of its forcible relocation might be found, as is the case with most things wrested from their dwelling.
I. Enjoying This Chapter
A note on enjoying this chapter: Imagine a plate of artisanal meats and cheeses. Imagine lardo, Sainte-Maure-de-Touraine, cornichons with spicy mustard, Shropshire blue, pickled cauliflower, and house-made blood sausage. The thing that makes them a plate is that they are all on a plate. No one sends an assiette de charcuterie back to the kitchen because it doesn’t make an argument.
II. The Things We Do
The tarmac atop airport runways and aprons and taxiways isn’t really tar-penetration macadam. Instead of bound coal tar and ironworks slag, these days airports are coated with asphalt or concrete, like cupcakes are frosted with buttercream instead of confectioner’s sugar.
It’s a misunderstanding your airplane didn’t notice when its rubber wheels glanced against the concrete runway of Milwaukee’s Mitchell Airport, stretching its surface imperceptibly, like this week’s raindrops against the nylon of tents thrown near Pewaukee Lake twenty miles to the west. The flights cleared behind and in front of you were similarly oblivious, no less than the precipitation or the Goodyear Flight Radials. You weren’t listening, either, to the grooved concrete or the raindrops. You were sleeping in your leather seat or your dome tent, dreaming about Kopp’s Frozen Custard or reading Alfred North Whitehead or fretting over your presentation for The Nonhuman Turn conference.

A group of philosophers in a lecture hall isn’t unlike a convoy of aircraft on approach. One lone Airbus A320 or associate professor performs—nose up, flaps down, throttle up, voice screeching, exhaust droning, before the rubber meets the road. The brakes engage, the show ends. Thank you very much. Meanwhile, onlookers stare agape, ears covered or plugged or otherwise impeded by droning noise pollution. Oh thank god, it’s over, they cheer silently before clutching in to proceed through the traffic light on the airport-adjacent frontage road, or while plunging a Moleskine notebook into a Tom Bihn bag.

For some performances, multiple dancers tousle the venue all at once. Close-spaced double, triple, or parallel approaches at Chicago O’Hare or Atlanta Hartsfield-Jackson or the Society for Cinema and Media Studies Conference. Increased organizational apparatus is key: instrument landing systems, interstitial coffee break apparatuses, microwave landing systems, multiple lecture rooms, staggered approaches, diagonal radar separation—aircraft and lecture isolation alike. Final monitor controllers are required to ensure that the NTZ (no-transgression zone) is not entered. Overstressed, underpaid air traffic controllers and graduate students ensure order: “Delta two-eight-seven-niner cleared to land runway two-three, traffic landing runway one-zero”; or, “Ah, Professor Bogost, your session is in the Oak Room. It’s just around the elevator, past the Gramsci display.”

We do the things we do because they are the things we do, so
we do them. We do them so as not to disturb the way things are done—traffic landing runway one-zero, after all.

It is at least as unlikely that this artifact will become a conversation as it is that it will become an orgy. Presumably I will write—I was brought here to write—and you will read. Had your timing or finances been better, you would have listened, or at least performed a ritual that would have been mistaken for listening (or even for reading). Some, struck by pique, may put it down to shop for a K-Cup for Keurig coffee brewers. Others may hold their heads in their hands lamenting in the guise of concentrating. Let your mind wander. Eventually, inevitably, soon perhaps, but not soon enough, I will stop. Had we been face to face, live, polite ovation would follow. “We do have some time for questions,” someone would have said, perhaps even me. “Isn’t this just art?” someone might ask, or “Aren’t you committing a fatal Orientalism?” or “Interesting provocation, but I’m not sure I understand what you’re suggesting we do” or in any case “Not really a question, more of a comment.” Now that you’re reading this instead, you can tweet about it instead: “Really unusual book chapter by @ibogost” or even just “.@ibogost is bonkers.” Thank you for flying Nonhuman Turn Edited Collection. We know you have a choice when you read. We wish you a pleasant stay in object-oriented ontology or in whatever your final philosophy may be. Then coffee, or wine, or schnitzel, or blogging.

What do we do when we do philosophy? That’s not a rhetorical question. I’m serious. Why am I writing at you now, instead of serving you custard? I’m not sure I can fully convey to you the degree to which I am freaked out about what a philosopher could do or ought to do. This is not an act.
III. The Nonhuman Return
Milwaukee is the only town anywhere where someone might recognize my unusual surname, and that’s because the Bogost family settled here after emigrating from Russia in the nineteen-aughts, and indeed all the world’s Bogosts can be traced back to the city. My father and his two brothers grew up there during the Great Depression and World War II, attending North Division High
School before shipping off to Madison to study the newly fashionable discipline of psychology and psychiatry in the early 1960s. As my father would say, it takes one to know one.

The Nonhuman Turn was the second scholarly event I attended at the University of Wisconsin-Milwaukee, and the second time I have come to Milwaukee for reasons other than visiting family. The first was hosted by Sandra Braman and Thomas Malaby in late April 2006, conveniently timed such that I was able to attend my grandmother’s ninety-fifth birthday party that week as well. For years she lived right up the street from the university in Shorewood, near Oakland and Capitol. There wasn’t much to do during those visits. Sometimes we would walk down to Sendik’s for oranges or Benji’s Deli for corned beef, and if I were very lucky, someone would take me to Walgreens for bubblegum or Garbage Pail Kids. Eventually we would drive up to Sister Bay, where there wasn’t much to do either. Oranges would be replaced by cherries, corned beef by smoked whitefish. My grandfather ate the heads off of them. That trip in 2006 was the last time I saw her, at Jack Pandl’s Whitefish Bay Inn, which first opened when she was four years old. My daughter, just barely four herself at the time, panicked as dusk settled into night over North Lake Drive. “I have to go to my hotel!” she cried, pointing outside desperately. “There are stars!”

I last came almost exactly a year before The Nonhuman Turn conference for my grandmother’s funeral—she lived less than a month past her one-hundredth birthday, just long enough so the rest of us could say she did. By happenstance, Hertz rented me a big Buick, as if they knew I would need to camouflage among the elderly. I’m not close to my extended family, and I’ll admit that I was horrified that many of them were staying in the hotel I had chosen solely for Hilton HHonors elite tier status credits. Stuart Moulthrop picked me up and we ate schweinebraten and drank Franziskaner at Mader’s downtown. We foolishly overate and had no room for frozen custard, even though Kopp’s was right down North Port Washington Road from the Hilton Milwaukee River. A trip to Milwaukee is not complete without somehow failing to make it to Kopp’s.
I’m also not very sentimental, but so much of object-oriented ontology is, to me, a reclamation of a sense of wonder often lost in childhood, that coming to Milwaukee as an adult, a professor, perhaps a philosopher even, makes me think of the rhubarb that would grow in the summer behind the house on Marion Street, or the milk delivery door at its rear, or the tea samovar on the shelf in the dining room. It didn’t used to be so strange to be interested in things, but somehow it became so. Maybe this nonhuman turn is really a return, for you as much as for me. The things were always here, waiting. The rhubarb doesn’t care about actual occasions or Antonio Negri.
IV. Carpentry, Part 1
In my recent book Alien Phenomenology, I advance an idea I call carpentry. It’s a theory of philosophical productivity. I found myself at the Nonhuman Turn conference performing the very act I critique in that chapter. When philosophers and critics gather together, I wrote in the book, they commit their work to writing, often reading esoteric and inscrutable prose aloud before an audience struggling to follow, heads in hands—that’s why you’re excused for shopping for coffee pods. Ideas become professionally valid only if written down. And when published, they are printed and bound not to be read but merely to have been written. Written scholarship is becoming increasingly inaccessible even to scholars, and publication therefore serves as professional endorsement rather than as a process by which works are made public.

The scholar’s obsession with writing creates numerous problems, but two in particular deserve attention and redress. First, academics aren’t even good writers. Our tendency toward obfuscation, disconnection, jargon, and overall incomprehensibility is legendary. As the novelist James Wood puts it in his review of The Oxford English Literary History:

The very thing that most matters to writers, the first question they ask of a work—is it any good?—is often largely irrelevant to university teachers. Writers are intensely interested in what might be called aesthetic success: they have
to be, because in order to create something successful one must learn about other people’s successful creations. To the academy, much of this value-chat looks like, and can indeed be, mere impressionism.1

The perturbed prose so common to philosophers, critical theorists, and literary critics offers itself up as an easy target, but it’s not alone. Many scholars write poorly just to ape their heroes, thinkers whose thought evolved throughout the linguistic turn. In any case, most of us don’t write for the sake of writing, despite simultaneously insisting that literature is somehow more naturally sacrosanct than painting, video games, or The Real Housewives of Waukesha.

Second, writing is dangerous for philosophy. It’s not because writing breaks from its origins as Plato would have it, but because writing is only one form of being. The long-standing assumption that we relate to the world only through language is a particularly fetid, if still bafflingly popular, opinion. But so long as we pay attention only to language, we underwrite our ignorance of everything else. Levi Bryant puts it this way:

If it is the signifier that falls into the marked space of your distinction, you’ll only ever be able to talk about talk and indicate signs and signifiers. The differences made by light bulbs, fiber optic cables, climate change, and cane toads will be invisible to you and you’ll be awash in texts, believing that these things exhaust the really real.2

Bryant suggests that our work need not exclude signs, narrative, and discourse, but that we ought also to approach the nonsemiotic world “on its own terms as best we can.” When we spend all of our time reading and writing words—or plotting to do so—we miss opportunities to visit the great outdoors.
V. Cows, Part 1
Recently, there have been numerous rejoinders against the armchair cogitations of traditional philosophy. One such trend has been dubbed “experimental philosophy,” and it looks a lot like
cognitive psychology. Philosophers like Kwame Appiah and Joshua Knobe observe participants, collect data, run cognitive experiments, and attempt to draw conclusions from their results, which primarily address issues of ethics, thought, and belief.

Even more recently, philosophers like Robert Frodeman have advanced a position called “field philosophy.” Borrowing the modifier from “field science,” such philosophers not only abandon the wood-lined enclaves of their offices and libraries, but also eschew the public square of experimental philosophy. Instead they begin “in the world,” according to Frodeman, “drawing out specific, underappreciated, philosophic dimensions of societal problems.” Field philosophy “integrates ethics and values concerns with the ongoing work of scientists and engineers.”3

Experimental and field philosophies have their detractors. Some accuse these efforts of mere instrumentalism, of turning philosophy itself into standing-reserve, or of selling out philosophy in a barely veiled support of the neoliberal interests of global capital. But Frodeman, Knobe, and others respond that philosophy ought to serve the world, not just the mind, and not just the academy. Furthermore, they argue, given the current state of things, disciplines like philosophy must adapt and renew themselves to respond to changing times, both in the university and in the world.

I’m less bothered by the purported prostitution of philosophy than I am disappointed that such approaches prove so limited in their ambition. This isn’t philosophy, really, but merely ethics, an area that, as it happens, has long filled classes and books thanks to accreditation requirements in engineering. And ethics is still a field of human interest, an amplification of the same human-world correlate that systematically omits all other beings from philosophical consideration. I could suggest object-oriented philosophy as an alternative, one that embraces the same orientation toward the world, but without the human-centered instrumentalism of experimental and field philosophy. Would it really be so daft to admit that the world is simply full of interesting, curious things, all living their own alien lives, bumping and jostling about, engulfing and destroying one another, every one of them as secretive and withdrawn as any other? If field philosophy just means driving our cars past the cows to
the industrial farms and then going back home to write up ethics white papers, why bother leaving the office? Cows would make better field philosophers than philosophers would, since at least they work in fields. What if instead the field were the philosophy, not just the place where the philosopher goes to stroke his chin and scuff his wingtips before returning to an iPad in a rental Mazda and a MacBook in a Holiday Inn Express and, two flights later, an office door left just open enough to discourage anyone from entering? I am here, but not for you.
VI. Carpentry, Part 2
In Alien Phenomenology, I outline two versions of carpentry, which I might now call general and special carpentry, even though I don’t use those names in the book. General carpentry extends the ordinary sense of woodcraft to any material whatsoever—to do carpentry is to make things as philosophy, but to make them in earnest, with one’s own hands, like a cabinetmaker. Special carpentry takes up a philosophical position more directly connected to the practice of alien phenomenology, that of speculating about the experience of things. Into the general sense of carpentry, this special sense folds Graham Harman’s idea of “the carpentry of things,” an idea he borrowed in turn from Alphonso Lingis. Both Lingis and Harman use that phrase to refer to how things fashion one another and the world at large. Special carpentry entails making things that explain how things make their world.

In the book I offer several examples largely from the domain of computing, where I do most of my work. These include the Latour Litanizer, which fashions lists of objects in the style of Bruno Latour, a phenomenon I name “Latour Litanies,” and I Am TIA, a device that approximates the perceptual experience of the custom graphics and sound chip in the 1977 Atari Video Computer System. The first exemplifies ontography, a practice of philosophical enumeration central to my version of object-oriented ontology; the second carries out metaphorism, a speculative process of characterizing object experience through metaphor. Carpentry might offer a more rigorous kind of philosophical
creativity, precisely because it rejects the correlationist agenda by definition, refusing to address only the human reader’s ability to pass eyeballs over words and intellect over notions they contain. Sure, written matter is subject to the material constraints of the page, the printing press, the publishing company, and related matters, but those factors exert minimal force on the content of a written philosophy. Although a few exceptions exist (Jacques Derrida’s Glas, perhaps, or the Nietzschean aphorism, or the propositional structure of Baruch Spinoza’s Ethics or Ludwig Wittgenstein’s Tractatus), philosophical works generally do not perpetrate their philosophical positions through their style or their form. The carpenter, by contrast, must contend with the material resistance of his or her chosen form, making the object itself become the philosophy. This is aesthetics as first and last philosophy.
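The Latour Litanizer itself was a small web program (it drew object names from Wikipedia’s random-article feature), so a reader who wants to feel what ontography does in code can approximate it in a few lines. The sketch below is my illustration, not Bogost’s implementation: the object pool, the function name, and the joining rule are all stand-ins, and the pool is seeded with things from this very chapter.

```python
# A minimal sketch of ontographic list-making in the spirit of the
# Latour Litanizer. The original drew random article titles from
# Wikipedia; this self-contained stand-in samples from a fixed local
# pool instead. Names and pool contents are illustrative only.
import random

OBJECT_POOL = [
    "rhubarb", "a tea samovar", "Goodyear Flight Radials", "frozen custard",
    "an orrery", "grooved concrete", "a dome tent", "a Moleskine notebook",
    "smoked whitefish", "a diamond cowbell", "fiber optic cables", "cane toads",
]

def litanize(n=5, seed=None):
    """Splice n randomly chosen objects into a flat, comma-spliced litany."""
    rng = random.Random(seed)
    items = rng.sample(OBJECT_POOL, n)
    if n == 1:
        return items[0]
    return ", ".join(items[:-1]) + ", and " + items[-1]

if __name__ == "__main__":
    print(litanize(seed=1704))  # e.g., "an orrery, cane toads, rhubarb, ..."
```

The flat comma splice is the point: the litany juxtaposes wildly unlike things without subordinating any of them to a governing category.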
VII. Cows, Part 2
The Kopp’s Frozen Custard website offers a flavor forecast. On Thursday, May 3, 2012, the first day of the Nonhuman Turn conference, Mint Chip and Chocolate Thunder poured from metal spouts as heavy rain poured from the Milwaukee sky. One can’t help but wonder if the custard mirrors the weather, or the weather the custard. According to the forecast for Saturday, May 5—the day I once spoke many of these words instead of you reading them—those fortunate to enjoy custards instead of philosophy indulged in Dulce de Leche and Chocolate Peanut Butter Chocolate.

The mere existence of “Wisconsin-style frozen custard” should be enough to make us all stop reading Hegel, but alas, we shall not do so. I’m not sure making custard is less noble or less philosophical than making philosophy, where philosophy means words written down on paper, typeset, and glued to bindings or distributed over Whispernet instead of pasteurized egg yolk and milk fat drawn through a refrigerated hopper with low air overrun. Is the dense and creamy mouthfeel of custard less rigorous than an abstruse and elaborate system of political, ethical, or ontological thought? I have to admit, I’d rather eat custard. Not cultural studies, but custard studies. As it happens, I’m an accidental bovine philosopher, so I have a head start.
At the 2010 Game Developers Conference, a schism seemed to erupt between “traditional” game developers, who make the sorts of console and casual games we’ve come to know well, and “social” game developers, who make games for Facebook and other networks. It was a storm that had been brewing for a few years, but the massive success of Zynga’s FarmVille along with the company’s publicly malicious attitude had made even the most apathetic of game developers suddenly keen to defend their craft as art. In July of that year, my colleague Jesper Juul invited me to take part in a game theory seminar he runs at NYU, which he provocatively titled “Social Games on Trial.” Researcher and social game developer Aki Järvinen would defend social games, and I was to speak against them.

As I prepared for the NYU seminar, I realized that theory alone might not help clarify social games—for me or for anyone in attendance. It’s nice to think that “theorist/practitioners” like myself and Aki can translate lessons from research to design and back like adept jugglers, but things are far messier, as usual. The dialectic between theory and practice often collapses into a call-and-response panegyric. This in mind, I thought it might be productive to make an example that would act as its own theory—a kind of carpentry. In the case of social games, I reasoned that enacting the principles of my concerns might help me clarify them and, furthermore, to question them. So I decided to make a game that would attempt to distill the social game genre down to its essence.

The result was Cow Clicker, a Facebook game about Facebook games that completely consumed my life for well over a year and a half. There was a picture of a cow, which players could click every six hours. Each time they did, they received one point, called a click. Players could invite friends to join their “pasture”; when any of one’s pasture-mates clicked, the player would receive a click, too. Players could purchase in-game currency, called “mooney,” which they could use to buy more cows or to skip the click timer. There was more—much more, embarrassingly more—but that’s enough to get us started.

By sheer reach Cow Clicker is easily the most successful work I have ever produced. More than fifty thousand people played it, and in most circles I am now most easily introduced as “the Cow Clicker guy.” I was hoist on my own petard, as compulsively obsessed with
running my stupid game as were the players who were playing it. I added cow gifting; an iPhone app (Cow Clicker Moobile); a children’s game (My First Cow Clicker); a “cowclickification” API; and cow clicktivism, which allowed players to click virtual cows to send real cows to the third world via Oxfam Unwrapped. When it came time to end it, I launched a bovine alternate reality game played on four continents that revealed the coming Cowpocalypse—Cow Clicker’s rapture. I have spent more time making cows than reading Alfred North Whitehead.
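Because the game’s rules carry its argument, it may help to restate those rules as a toy model. The sketch below encodes only the mechanics described above (a six-hour click cooldown, clicks echoed to pasture-mates, and mooney spent to skip the timer); the class and method names are my own illustrative guesses, not the game’s actual code.

```python
# A toy restatement of the Cow Clicker rules described in the text:
# one click per six-hour cooldown, clicks echoed to pasture-mates,
# and "mooney" spent to reset the timer. Illustrative names throughout.
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=6)

class Player:
    def __init__(self, name):
        self.name = name
        self.clicks = 0            # the game's single score: clicks
        self.mooney = 0            # purchasable in-game currency
        self.pasture = []          # fellow players whose scores echo ours
        self.last_click = None

    def can_click(self, now):
        return self.last_click is None or now - self.last_click >= COOLDOWN

    def click(self, now):
        if not self.can_click(now):
            raise RuntimeError("cow not clickable yet")
        self.last_click = now
        self.clicks += 1
        for mate in self.pasture:  # pasture-mates receive a click, too
            mate.clicks += 1

    def skip_timer(self, cost=1):
        if self.mooney >= cost:    # pay mooney to make the cow clickable now
            self.mooney -= cost
            self.last_click = None

# Usage: one click scores for the clicker and everyone in the pasture.
a, b = Player("Adam"), Player("Jamie")
a.pasture.append(b)
a.click(datetime(2011, 9, 7, 18, 0))
print(a.clicks, b.clicks)  # 1 1
```

Stripped to this skeleton, the “game” is visibly nothing but a timer and a counter, which is exactly the distillation the satire aimed at.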
VIII. Carpentry, Part 3
What does it feel like to make custard or cows? Making something from the ground up, participating in every process; avoiding abstraction. It is handicraft. The craftsman asks, What is involved in the creation or genesis of a thing? We philosophers consider this act only in the most cursory way.

Forget the custard and the pastures for a moment, just consider once more the thing scholars usually make when we deign to make things: books. Several years ago Eugene Thacker gave me a slim book in the hallway between our offices at Georgia Tech. When I asked him what it was, he said, “It’s a prototype.” He and Alex Galloway were writing The Exploit, their weird book about network culture. The two had uploaded their work in progress to the print-on-demand site Lulu and run a few perfect-bound prototypes of their book-in-progress. That’s what Eugene meant by a prototype: not (just) a first go of the ideas, but also of the form of the object.

It wasn’t perfect. Lulu and others trim beyond standard sizes, the paper isn’t of offset quality or weight, the layout isn’t typeset and endnoted professionally, and so forth. It could be, of course. I’ve published books as an author and as a publisher, and the process is straightforward enough. Getting a hard proof for the first time is liberating and weird, even though it’s so simple. A book, not a collection of words. A book that opens and dog-ears and fits in a bag or under the short leg of a wobbly table. A book that can kill a fly or be subject to marginalia conductivity tests.

Compare Thacker and Galloway’s experience to the normal publishing process. Back when his book The Textual Life of Airports
was published in December 2011, Christopher Schaberg reported what most authors do: seeing his book for the first time. “What a weird feeling,” Chris wrote on his blog. “It resembles an object from outer space. Vaguely recognizable, yet totally alien at the same time.” This is the experience of most authors. We say we write books, but really we write words. Then we put them in a FedEx box and give them to a publisher, who performs a ritual upon them that eventually spits out a book. Writers make words, and then they sign over rights to make books to book-makers.

It’s not always a bad thing. The book Continuum Publishing made for The Textual Life of Airports is well designed and attractive, delightful to hold, a nice size, laid out well. It’s also one hundred dollars in a hardback-only edition, which means that no normal human beings will buy it until Continuum gets around to publishing a paperback edition. All of which just underscores the point: authors rarely make books, where a book is an object with certain properties meant for the lives of readers. The publishing industry is not the problem, either. Sure, self-publishing puts more apparent control in the hands of an author, but the reality of print-on-demand (POD) printing and eBooks is one of far less design control than was ever possible in offset printing. Books can be designed. POD books just get uploaded and pressed out. They are the lunchmeats of publishing.

Shortly before The Textual Life of Airports dropped, Chris and his Loyola colleague Mark Yakich put together another book, Checking In / Checking Out. It’s a two-sided book about airports and airplanes, one written by each author. The book resembles a passport in size and shape and even texture, and the effect just makes you want to carry the thing with you when you travel. Which, of course, is the perfect time to read it. As the Los Angeles Times put it, “About 5 by 6 inches, small enough to tuck into a jacket pocket or a purse, it’s easy to carry, doesn’t take too long to read, and is quite nice to look at. And if you carry it on a plane, you don’t have to turn it off.”4 To produce that effect, Schaberg and Yakich had to write, design, print, market, and distribute a book—a real object in the world. Not just a series of words on pages sent to a publisher.

I’ve certainly found myself thinking more and more about this
over the years (and asking more and more of my publishers). Both Racing the Beam and Newsgames are books whose physical size, heft, and feel very much please me. They are readable and attractive and desirable as objects. Likewise, How to Do Things with Videogames was made with a particular experience in mind: short chapters, small form factor, inexpensive paperback edition from day one, and so forth. A book people read and finish. And I hope enjoy having and experiencing as much as they enjoy reading it. I put laborious effort into the creation of A Slow Year, which is a book despite also being a video game.

Experiences like these have made me realize that books are not just boxes for ideas. They are also just boxes, like cereal is just boxes. There is a chasm between academic writing (writing to have written) and authorship (writing to have produced something worth reading). But there’s another aspect to being an author, one that goes beyond writing at all: book-making. Creating the object that is a book that will have a role in someone’s life—in their hands or their purses, wrapped around their mail, in between their fingers.
IX. Materials
But beyond books, what approaches do we have? One would involve embracing the materiality of different media for their own sake, rather than insisting that we make appeals to writing and speech as the singular and definitive models for intellectual productivity. For some time now I have been arguing for the use of models, particularly computational models, to make arguments.

For example, I’ve written several times about La Molleindustria’s The McDonald’s Videogame, a scathing critique of the multinational fast-food industry. The game demonstrates the abject corruption required to maintain profitability and manageability of a large global food company. It’s a good example of what I’ve previously called procedural rhetoric, arguments fashioned from models. In the game, players control fields in South America where cattle are raised and soy is grown; a factory farm where cows are fed, injected with hormones, and controlled for disease; a restaurant
where workers have to be hired and managed; and a corporate office where advertising campaigns and board members set corporate policy. As play progresses, costs quickly outstrip revenue, and the player must take advantage of more seedy business practices. These include razing rainforests to expand crops, mixing waste as filler in the cow feed, censuring or firing unruly employees, and corrupting government officials to minimize public outcry against these actions.

But many players—especially those who are technically minded and enjoy mastering their video games—find themselves lamenting the difficult job of McDonald’s executives rather than being incensed by their corrupt corporate policies. I’ve had a number of students make this observation about the game, in fact. “I empathized with the CEO of a big company. They have it rough.” When Molleindustria released a similar game, Oiligarchy, about the global petroleum market, they seem to have recognized this failing, if that’s the right word for it. In response, they posted a “postmortem” with text and images that explain the premise of the game: peak oil, supply and demand, imperialism, and so forth.

An obvious question, then: If the game is incapable of or insufficient to do this, if the traditional media of text and image are necessary or even better as explanations, then why are we making games? It risks becoming a purely aesthetic exercise, a kind of accessory. A bag of peanuts to go along with the “real” media of language. Whether through convention or reception, the growth of form is short-circuited. Language reasserts itself. Procedural representation is not proven intrinsically ineffective, but subordinated to the media ecosystem in which it serves as unrealized underdog.

Consider the orrery. It’s a mechanical device that illustrates the position and motion of planets and moons in the heliocentric model of the universe. It’s named for the Earl of Orrery, to whom the first such example was presented in 1704. It would be possible to write or draw such an explanation of object interactions, for example, in a textbook or on a classroom poster. But no such explanation would disrupt or undermine the orrery as object, as craftwork. As a physical model, as a procedural argument. No one
would make an orrery and slink off apologetically to write a pamphlet to take its place, like La Molleindustria did for Oiligarchy. “I’m sorry, I didn’t mean to make a model, please take these words instead.”

Contrast the orrery with page 87 of Jared Diamond’s Pulitzer Prize–winning book Guns, Germs, and Steel. It’s a book about the ultimate material causes of human historical progress. Diamond traces proximate factors in Eurasian global domination like guns, steel, swords, disease, politics, and writing to ultimate factors like plentiful plant and animal speciation and the east-west axis of Eurasia that allowed easier spread of animal husbandry and agriculture, which in turn facilitated food surplus, storage, population density, politics, and technology. No matter what you think of Diamond’s position on geographical and material accident underlying all of human history, you might be struck by this single page—page 87—one of more than five hundred in the book. It rather stands in for the rest. Yes, of course there’s a large amount of detailed description on those five hundred pages, but the fundamental argument is here on this chart. I’d wager that W. W. Norton wouldn’t have published a laminated sheet with just page 87 on it, nor would that laminated sheet have won the Pulitzer Prize in nonfiction.

Another example, one closer to home. On Wednesday evening before The Nonhuman Turn conference began, Tim Morton and I sat drinking beer at Von Trier’s on Farwell and North. My ten-year-old daughter started texting me: “Why r u in Milwaukee,” she asked. “Conference! Nonhuman Turn,” I responded. “Mmm,” she considered. “Weird, right,” I offered apologetically. “Yeah,” she said, adding an alien emoji to the exchange. “That’s nonhuman,” she clarified. Adding my own pictogram, a slice of cake, I responded, “So is this.”

This may seem harmless enough to you, if perhaps also adorable (“my daughter’s first lesson in object-oriented ontology,” I called
it on Twitter). But there’s something quite serious going on here. This exchange goes further than most philosophy, because it takes a concrete situation and makes it manifest, in the moment, with only the tools I had on hand—my iPhone and its built-in emoji set. Sometimes I wonder: Why am I writing books when I could just write text messages to my ten-year-old? That may sound flip or glib, but what if we took the daily practice of philosophy seriously, not just the occasional chore of it?

Or, one more: Tim Morton gave a rousing talk at The Nonhuman Turn conference. I have no idea how it could be effectively reproduced in this book, in print, when it was so performative, so rhythmic and throbbing. Afterward, he fielded a number of questions about his rather arresting presentation. Most were questions about the form of the presentation: What is it that you just did? On the one hand, they are reasonable questions, and there’s no doubt that Tim’s presentation warranted them. But on the other hand, nobody would have thought to ask Steven Shaviro a question like, “I noticed you cited numerous philosophers by reading quotations from their books, which you interspersed with various commentaries about those quotes. Why did you do that?”

We are stuck in our materials, and mostly we don’t realize it. Perhaps we are too obsessed with foregrounding political ideology even to notice the ideology of materials. But even if not, the ideology that is our materials is perhaps one to which we are even more blinded. Ideology critique demands that we take a closer look at what we take for granted. We ourselves offer one such object of study, by means of the tools we deploy for work like ideology critique.
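To return once more to The McDonald’s Videogame: the procedural argument it makes can itself be stated procedurally. The following deliberately crude sketch rigs a balance sheet so that honest play trends toward loss and only corrupt practices restore profitability; every number and practice name here is invented for illustration, not taken from Molleindustria’s design.

```python
# A compressed, invented illustration of procedural rhetoric: the rules
# themselves argue that corruption is the only path to solvency.
def quarter(balance, seedy_practices=()):
    revenue, costs = 100, 120            # honest play runs at a structural loss
    effects = {                          # hypothetical payoffs; ethical costs land elsewhere
        "raze_rainforest": 30,
        "adulterate_feed": 15,
        "bribe_officials": 20,
    }
    profit = revenue - costs + sum(effects[p] for p in seedy_practices)
    return balance + profit

balance = 50
for _ in range(4):                       # play honestly for a year
    balance = quarter(balance)
print(balance)                           # -30: bankrupt
print(quarter(50, ("raze_rainforest",))) # 60: corruption is the win condition
```

No sentence in the sketch asserts the critique; the arithmetic does, which is the sense in which models can make arguments.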
X. Cows, Part 3
Writing in Wired magazine in December 2011, Jason Tanz told the story of Jamie Clark, a student and military spouse living on Ellsworth Air Force Base outside of Box Elder, South Dakota.5 She had made close friendships with her fellow Clickers. “I don’t meet a lot of people who discuss politics and religion and philosophy, but these people do, and I like talking to them,” she says. “I’d rather talk to my Cow Clicker friends than to people I went to school with
for 12 years.” It’s a common refrain among dedicated Cow Clickers, who have turned what was intended to be a vapid experience into a source of camaraderie and creativity, Tanz summarized. He continues:

It may be that Cow Clicker demonstrates the opposite of what it set out to prove and that social games, no matter how cynically designed, can still provide meaningful experiences. That’s how Zynga’s Brian Reynolds sees it. “Ian made Cow Clicker and discovered, perhaps to his dismay, that people liked it,” Reynolds says. “Who are we to tell people what to like?” Gabe Zichermann, a gamification expert, also dismisses Bogost’s critique of Zynga’s games. “Other gamers may think FarmVille is shallow, but the average player is happy to play it,” he says. “Two and a Half Men is the most popular show on television. Very few people would argue that it’s as good as Mad Men, but do the people watching Two and a Half Men sit around saying, oh, woe is me? At some point, you’re just an elitist fuck.”

I had responded to this idea almost a year earlier, in a “rant” at the Game Developers Conference titled “Shit Crayons.” In it, I compared Cow Clicker players to the imprisoned Nigerian poet Wole Soyinka, who composed poems from his cell using whatever writing material he could find. How resilient is the human spirit that it withstands so much? No matter what shit we throw, nevertheless people endure, they thrive even, spinning shit into gold.

The Cowpocalypse finally arrived in the evening of September 7, 2011. Frantically working at a makeshift desk in my den, I flipped a few bits I had set up weeks earlier, and all the cows disappeared—raptured. In their place, just empty grass. Tanz explains better than I could:

They have been raptured—replaced with an image of an empty patch of grass. Players can still click on the grass, still generate points for doing so, but there are no new cows to buy, no mooing to celebrate their action. In some sense, this is the truest version of Cow Clicker—the pure, cold game
mechanic without any ornamentation. Bogost says that he expects most people will “see this as an invitation to end their relationship with Cow Clicker.” But months after the rapture, Adam Scriven, the enthusiastic player from British Columbia, hasn’t accepted that invitation. He is still clicking the space where his cow used to be. After the Cowpocalypse, Bogost added one more bedeviling feature—a diamond cowbell, which could be earned by reaching 1 million clicks. It was intended as a joke; it would probably take 10 years of steady clicking to garner that many points. But Scriven says he might go for it. “It is very interesting, clicking nothing,” Scriven says. “But then, we were clicking nothing the whole time. It just looked like we were clicking cows.”
XI. Idiots
Despite co-organizing it, I was unable to attend the third object-oriented ontology symposium, held in September 2011 at the New School in New York City. Georgia Tech had asked me to attend the World Economic Forum’s meeting of the New Champions in Dalian, China, and I couldn’t refuse. I stopped over in Seoul on the way, where in an online chat I lamented with Tim Morton having to miss the triple-O event. Tim suggested I make a video they could play in my absence, and despite massive jetlag I put together a short visual essay on the twentieth-century street photographer Garry Winogrand.

Among the surprising benefits of making a five-minute video in absentia instead of delivering a three-thousand-word paper, a popular photography site featured the video and it was viewed many thousands of times. One such accidental ontologist was Tod Papageorge, director of the Yale School of Art’s graduate photography program and a longtime friend and scholar of street photography in general and Winogrand’s work in particular. Papageorge sent me a 1974 lecture by Winogrand that included this nugget of wisdom: “A photograph is the illusion of a literal description of what the camera saw. From it, you can know very little. It has no narrative
ability. You don’t know what happened from the photograph. You know how a piece of time and space looked to a camera.”6 It’s a sentiment that bears fruit well beyond photography. The practice of alien phenomenology I call “metaphorism” involves amplifying rather than reducing distortion, capturing the metaphorical relation between objects by characterizing their perceptions through imperfect, speculative rendition. This is the gentle tragedy of carpentry, which Hugh Crawford has called “working like an idiot”: in doing what we cannot, we nevertheless must strive to make something. With enough effort and practice and attention, we can even make things that are not just sufficient but also beautiful.

Winogrand said that still photography is the clumsiest way to exercise imagination. “Dali can have a melted watch anytime he wants,” he explained. “It’s tantamount to driving a nail in with a saw, when you can use a hammer.” Carpentry is worse than this. Carpentry is the process of driving a nail in with a cup of frozen custard.

I want to put my custard where my hammer is, so to speak. I have many carpentered projects planned and in progress, using many different materials. But computers and cows notwithstanding, I still fancy writing—but real writing, writing where the writing matters, not just the written matter. Writing that’s not sold in bulk to a tenure and promotion committee, but pruned like bonsai. If I were really serious about the claims I made about carpentry in Alien Phenomenology, then I shall have to try to make good on them. As such, the work I want to do with objects is metaphoristic, not critical. I want to write well rather than write to completion. We don’t have to give up writing to be philosophical carpenters.

And as I think about being a philosopher—the kind who writes, anyway, at least some of the time—I realize that I can’t currently imagine writing philosophical arguments or treatises or positions. Fault me for it if you’d like, but I just don’t want to interpret Whitehead or Rancière. What if we took a break from it, from philosophical history for a while? What if we stopped making arguments? One day, I hope this might be philosophy. I hope I might write some of it, and that you might read it. If that hope conflates philosophy with poetry or fiction, then so be it. Plato was wrong about poetry anyway.
Notes
1. James Wood, “The Slightest Sardine,” review of The Oxford English Literary History. Vol. XII: 1960–2000: The Last of England?, London Review of Books, May 20, 2004, 11–12, http://www.lrb.co.uk/.
2. Levi R. Bryant, “You Know You’re a Correlationist If . . . ,” Larval Subjects (blog), July 30, 2010, www.larvalsubjects.wordpress.com.
3. Robert Frodeman, “Experiments in Field Philosophy,” The Stone blog, New York Times, November 23, 2010, http://opinionator.blogs.nytimes.com/.
4. Carolyn Kellogg, “Little Books: An Airplane Reader,” Jacket Copy column, Los Angeles Times, December 20, 2011, http://latimesblogs.latimes.com/jacketcopy/.
5. Jason Tanz, “The Curse of Cow Clicker: How a Cheeky Satire Became a Videogame Hit,” Wired, December 20, 2011, www.wired.com/magazine/.
6. UCR/CMP Podcasts: Collections Series—Garry Winogrand, University of California-Riverside, California Museum of Photography, April 3, 2008, https://itunes.apple.com/us/podcast/ucr-california-museum-photography/. Also available from http://cmplab16.ucr.edu/podcasts/2008.0009.0003/UCR_CMP_Podcasts_CollectionsSeries2.m4a.
Our Predictive Condition; or, Prediction in the Wild
Mark B. N. Hansen
The Politics of Imminent Threat
The February 2013 confirmation hearings for John Brennan, President Barack Obama’s nominee for CIA director, rekindled—indeed significantly ramped up—comparisons between the administration’s recent policy decisions concerning collection of personal data and the fantasy of “precrime” made famous by Steven Spielberg’s 2002 film Minority Report. Already in December 2012, following a Wall Street Journal article detailing Eric Holder’s March 2012 decision to grant the National Counterterrorism Center (NCTC) broad rights to collect and archive private data from individual citizens,1 Jesselyn Radack, national security and human rights director at the whistleblower nonprofit Government Accountability Project, likened the administration’s move to the fantasy at the heart of Spielberg’s film:

In the movie Minority Report, law enforcement has an elite squad called “Precrime,” which predicts crimes beforehand and punishes the guilty before the crime has ever been committed. In yet another example of life imitating art, a blockbuster Wall Street Journal article describes how the National Counterterrorism Center (NCTC)—an ugly child of the Director of National Intelligence—can now examine the government files of ordinary, innocent U.S. citizens to look for clues that people might commit future crimes.2
In the wake of the Brennan hearings, the talk of precrime was again all over the media, only now with a more precise focus—the administration’s position on drone killings—and a far more astute understanding of the logic informing this position. Particularly incendiary about Brennan’s testimony were his clarification of the underlying logic for preemption and his specification of the concept of “imminent threat.”

What Brennan makes clear in his testimony is that, when assessing the danger of terrorist threats and of individuals involved in such threats, the NCTC does not operate like a court of law. Indeed, far from it, for, rather than making decisions about past guilt on the basis of past information, the NCTC must make decisions about future guilt, that is, guilt for activities that have not yet occurred, and must do so on the basis of a set of factors bearing on the likelihood of such activities indeed occurring—factors that include the seriousness of the threat, the temporal window of opportunity for intervention, and the possibility of reducing collateral damage—which taken together present preponderant or overwhelming evidence of the “imminence” of the threat at issue. Brennan’s language unequivocally marks the extrajudicial terrain of the question:

JOHN BRENNAN: Senator, I think it’s certainly worthy of discussion. Our tradition—our judicial tradition is that a court of law is used to determine one’s guilt or innocence for past actions, which is very different from the decisions that are made on the battlefield, as well as actions that are taken against terrorists. Because none of those actions are to determine past guilt for those actions that they took. The decisions that are made are to take action so that we prevent a future action, so we protect American lives. That is an inherently executive branch function to determine, and the commander in chief and the chief executive has the responsibility to protect the welfare, well-being of American citizens. So the concept I understand and we have wrestled with this in terms of whether there can be a FISA-like court, whatever—a FISA-like court is to determine exactly whether
or not there should be a warrant for, you know, certain types of activities. You know . . .

ANGUS KING: It’s analogous to going to a court for a warrant—probable cause. . . . (crosstalk)

BRENNAN: Right, exactly. But the actions that we take on the counterterrorism front, again, are to take actions against individuals where we believe that the intelligence base is so strong and the nature of the threat is so grave and serious, as well as imminent, that we have no recourse except to take this action that may involve a lethal strike.3

With its clear distinction between the act of judging past involvement in crimes and the act of warding off the imminent threat of future crimes, Brennan’s comments effectively position the NCTC as a latter-day Precrime Division: while courts “get involved when people have already committed crimes,” as one astute blogger puts it, the NCTC must reserve for itself, and the government it represents, the right to declare people “imminent threats”—targets for killing—“even before they’ve committed a crime.”4

The distinction Brennan draws between judicial process and military force would seem to place his position broadly within the logic of preemptive power that Brian Massumi has developed to characterize post-9/11 American foreign policy. For Massumi, this logic involves a fundamentally altered relationship to an objective cause:

Deterrence revolved around an objective cause. Preemption revolves around a proliferative effect. Both are operative logics. The operative logic of deterrence, however, remained causal even as it displaced its cause’s effect. Preemption is an effective operative logic rather than a causal operative logic. Since its ground is potential, there is no actual cause for it to organize itself around. It compensates for the absence of an actual cause by producing an actual effect in its place. This it makes the motor of its movement: it
converts an absent or virtual cause really, directly, into a taking-actual-effect.5

Like preemptive power on this account, the right to kill that Brennan claims for the NCTC would appear to exercise its dominion where there are no actual causes yet, where “causal operative logic” has yet to become applicable. Understood on the logic of preemptive power, “imminent threat” would thus play the role of “virtual cause”: “a futurity,” as Massumi puts it, “with a virtual power to affect the present quasicausally.”6 What lends the virtual cause its force is the operation of the “unknown unknown,” understood as a kind of ultimate final cause for the threat at issue here. This absolute unknowability renders the threat an ontological problem—the problem of how to construct a relationship to what, from the standpoint of the present, remains “objectively uncertain”:

Like deterrence, it [preemption] operates in the present on a future threat. It also does this in such a way as to make that present futurity the motor of its process. The process, however, is qualitatively different. For one thing, the epistemology is unabashedly one of uncertainty, and not due to a simple lack of knowledge. There is uncertainty because the threat has not only not yet fully formed but, according to Bush’s opening definition of preemption, it has not yet even emerged. In other words, the threat is still indeterminately in potential. This is an ontological premise: the nature of threat cannot be specified.7

Despite a certain terminological wavering on Massumi’s part, the argument here mobilizes what appears to be a clear, categorical distinction between two models of causality, or perhaps more precisely, between a model of causality and a model of absolute noncausality (absolute refusal of causality).8 To the extent that it operates in relation to—indeed, in virtue of—an unknowability that can never be overcome, preemptive power exists orthogonally to causality, and operates—or claims to operate—independently of and beyond its scope.9 The threat that it claims to address is beyond
prediction (it “has become proteiform and it tends to proliferate unpredictably”); and it exists, if indeed it can be said to exist at all, as an empty—and thus infinitely fulfillable—form of the future (it is a “time form: a futurity” and is, as such, “nothing yet—just a looming”).10

Yet despite its absolutely central role as the kernel of Massumi’s account of preemptive power, the category of the unknown unknown receives little if any direct attention in his meditation. Invoking “the Architect,” Donald Rumsfeld, Massumi does tell us that “we are in a world that has passed from the ‘known unknown’ (uncertainty that can be analyzed and identified) to the ‘unknown unknown’ (objective uncertainty).”11 But what exactly objective uncertainty is—other than the negation of any figure of knowledge—remains uninterrogated. Indeed, it would appear that Massumi simply ratifies Rumsfeld’s analysis—and the operation of the unknown unknown as an absolute final cause—in order to focus on the intricacies of virtual causality. What enters the scene is fear as the efficiency of a quasi-causal logic that effectively substitutes for the causal efficacy of the real: “Threat is the cause of fear in the sense that it triggers and conditions fear’s occurrence, but without the fear it effects, the threat would have no handle on actual existence, remaining purely virtual.”12 Preemption mobilizes the affect of fear “to effectively trigger a virtual causality.” Affect, accordingly, emerges as the very medium or materiality of the threat’s exercise of power: “Preemption is when the futurity of unspecified threat is affectively held in the present in a perpetual state of potential emergence(y) so that a movement of actualization may be triggered that is not only self-propelling but also effectively, indefinitely, ontologically productive. . . .”13

From these descriptions, we can see that fear is the strict correlate of the unknown unknown: it comprises the very mechanism by which objective uncertainty can become a quasi-causally efficacious political force. The crucial question we must ask Massumi does not concern the coherence of this virtual quasi-causal logic, but rather the effect of positioning it as a substitute for the causal efficacy of the real. It is one thing to argue that Rumsfeld’s rhetoric of the unknown unknown made it possible to hijack this causal efficacy and to willfully impose a factually (or causally) “unjustified” imperialistic program.
But it is quite another to claim that this program—which certainly does have real causal consequences—operates at the same level as, and indeed, as an alternate modality of, the causal efficacy of the real. Because it effectively lends credit to the inaugural move of the logic of preemption—the decision to treat absolute uncertainty as source for causal efficacy—this latter position runs the risk of ratifying the Bush doctrine of preemption. Against such a position, we need to advance two theses:

1. There is no unknown unknown in material reality: the unknown unknown is an ideological mystification of a geopolitical phenomenon, terrorism, that gains whatever traction it has from its capacity to pose extreme difficulties for extant epistemologies.

2. There is a categorical distinction between empirical evidence and affective pseudo-evidence, and we must defend the role and autonomy of the former, and specifically its capacity to debunk any and all invocations of absolute uncertainty, against strategic attempts to subsume it under the umbrella of fear (or any other affectively sustained quasi-causality).
The Return to Reality
Let us now return to Brennan’s testimony, which, I emphasize, breaks with the Bush doctrine of preemptive war over the very point at issue here: the role and existence of the unknown unknown. Although he cannot but operate within a military scenario and world situation that, to be sure, was in some sense created quasi-causally by the Bush administration’s false assertion of a link between Iraq and Al-Qaeda, Brennan does not make any appeal whatsoever to absolute uncertainty in his claim for the right to kill. Rather, he cites more proximate factors that, even if they do not establish a clear and direct causal lineage from past to future (i.e., evidence of involvement in actual criminal activity), nonetheless furnish such a “strong . . . intelligence base” that, combined with other factors, including the gravity of the threat and its imminence, they justify—and indeed, leave “no recourse except”—the use of lethal force.
Among other things, what this break with the Bush-era logic of preemption signals is a certain return to reality: decisions concerning the targeting of individuals for drone killing will be made, not in virtue of an ultimate, and ultimately unknowable, source, but rather on the basis of as thorough an analysis as the given time frame permits of all available data concerning the situation at issue.14 Such analysis is, to be sure, causal, but with a complexity that eschews all notions of simple linear causality and that embraces indeterminacy-uncertainty-unknowability as the very aspect of reality that makes causal analysis necessary in the first place. Rather than focusing on the identification and isolation of a single cause or causal thread that makes its appearance at the macrolevel of experience, as a clearly delimitable “event,” such analysis seeks to identify propensities of situations on the basis of an attention to the “totality” of a situation. Thus rather than beginning with and orienting itself in relation to the notion of actions that may be committed against citizens of the United States, such analysis operates probabilistically on the matrix of the entirety of the data available at a given moment in time that pertains to a very general situation, say terrorist activity in a given region. And rather than seeking to establish guilt for past actions, such analysis seeks to identify future windows of space-time in which terroristic activity is likely to occur and then to find ways of linking such future propensity to individuals who can be targeted for killing in the present.

When Brennan attributes the distinction between military and judicial judgment to a difference in temporal modality, he doesn’t emphasize strongly enough, or indeed really at all, what I take to be the most fundamental element of the distinction: namely, the fact that predictive analysis of data for assessment of future probabilities generates information that is not there independently of the analysis. Such a generative dimension is a generic feature of data mining, as communications scholar Oscar Gandy Jr. observes: “Data mining is said to differ from ordinary information retrieval in that the information which is sought does not exist explicitly within the database, but must be ‘discovered.’ In an important sense, the relationships between measured or ‘observed’ events come to stand as a ‘proxy’ for some unmeasured relationship or influence.”15
The generative dimension that Gandy discovers in conventional data mining procedures becomes far more fundamental in the case of military data mining, where what is being mined is information relevant to events that have not yet occurred: whereas conventional, retrospective data mining involves analysis of past behavioral data to determine probabilities of future behavior, military, prospective data mining seeks to generate likely future scenarios based on the analysis of presently identifiable propensities extending into the future. To the extent that they operate to predict factors of situations that have not yet occurred, such future-oriented, prospective procedures of data mining and predictive analytics might be said to fill in the void of the “unknown unknown.” In contrast to the Bush administration’s logic of preemption, and also to Massumi’s analysis of it, they address the uncertain not as an ultimate final cause, but as a domain open to probabilistic analysis—as a domain that can be partially known. Thus, rather than the virtual production of a quasi-cause that, as Massumi puts it, “compensates for the absence of an actual cause by producing an actual effect in its place,” what is at issue in the logic of executive-military judgment sketched out by Brennan is an instrumental decision concerning an imminent threat that is real precisely to the extent that it is known—but that is known, let us be clear, not as something that already exists, but as something predictably likely to come to pass. This kind of reality finds its philosophical name in Alfred North Whitehead’s concept of “real potentiality,” which conceptualizes the operation of the future in the present: “The reality of the future,” says Whitehead, “is the reality of what is potential, in its character of a real component of what is actual.”16 I return to Whitehead’s concept in some detail later, but let me at this point simply underscore the broad scope of real potentiality’s sway: real potentiality does not qualify a given agent, but the diffuse and “total” potentiality of the world at a given moment. In this sense, Whitehead’s concept can help us appreciate something fundamental about the prospective model of data mining and predictive analytics: its predictive power is a direct function of its broad grasp. Not only does prospective data mining always address a “total situation,” but it does so by analyzing data at a myriad of levels (or scales); it is only
subsequently, as the result of an inductive process and as a function of probability, that such broad-ranging, microscalar analysis congeals into an identifiable macroscale phenomenon, an “event.”17 That this meditation on broadened imminence remains firmly anchored in real and concretely (if probabilistically) accessible factors marks its resistance to any assimilation into an event logic: today’s imminent threat differs from the quasi-causal absolute threat of preemption precisely because it can be qualified probabilistically, precisely because one can assess the degree of certainty that it will materialize. And what is qualified—what actually comprises the threat—is not the likelihood of a particular event, of a single actual cause, coming to pass, but a far more complex and diffuse calculus of propensities concerning a myriad of factors that can only be known insofar as they can be qualified probabilistically. It is these micro-propensities, not the events they may go on to inform, that are the objects of probabilistic modeling.
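To give one hedged illustration of what a “calculus of propensities” might look like formally, the sketch below combines several weak micro-propensities into a situation-level score using a noisy-OR rule, a standard device in probabilistic modeling. It is my schematic illustration only; the factor names and numbers are invented, and nothing here suggests that actual targeting systems work this way:

```python
# Hypothetical micro-propensities: each value is the (invented) probability
# that a given factor, on its own, tips the situation toward the outcome.
micro_propensities = {
    "communications_density": 0.04,
    "travel_pattern_shift": 0.07,
    "financial_anomaly": 0.03,
    "regional_instability": 0.10,
}

# Noisy-OR: the situation stays quiet only if every factor independently
# fails to tip it; no single "cause" is ever identified.
p_quiet = 1.0
for factor, p in micro_propensities.items():
    p_quiet *= (1.0 - p)

situation_score = 1.0 - p_quiet
print(f"situation-level propensity: {situation_score:.3f}")  # ~0.221
```

Note that no single factor is the “cause” of the score; remove any one of them and the situation-level propensity shifts but does not vanish, which is the structural point of the passage above.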
Rethinking “Precrime”

With this shift in the structure, and indeed, in the ontology of threat comes a displacement of fear in favor of evidence. If that displacement is what Brennan’s testimony expresses, as I am suggesting here, it inaugurates a model of “precrime” that breaks with the legacy of the notion, as it extends from Philip K. Dick’s original 1956 story, through Spielberg’s 2002 film, to contemporary theorizations of post-9/11 political culture. Central to all of these expressions of the preemptive logic of “precrime” is a focus on the macroscale event: in both the story and the film, what the “precogs” see is the actual scene of a murder, perpetrated by an individual in a specific place and at a specific time. When Richard Grusin invokes Minority Report—and the future vision of the precogs—as an allegory for an operation that he calls “premediation,” he takes on board this event-focused legacy:

Steven Spielberg’s Minority Report (2002), released less than a year after 9/11, epitomized the logic of premediation that had been intensifying over the past decade and more. . . . In Minority Report, rather than capturing past
neural experience for playback in the future [as in Kathryn Bigelow’s 1995 film, Strange Days], the technology captures “precognitions” of the future for playback in the present—for the purpose of preventing the recorded events from becoming actual history, to prevent the future from becoming the past.18

Grusin understands the recordings of the precogs’ visions as elements within a larger network aimed at mediating—or more precisely, premediating—future events prior to their occurrence. What makes these visions appropriate for this task is their status as recordings that are identical technically to other forms of cinematic recording: “Technically the recording device [in Minority Report] would seem to work very similarly to the wire in Strange Days. Even though the device in Minority Report is supposed to be recording murders that will be committed in the future, the sensory experience that it records is recently past experience, that is, the past mental experience of the three precogs.”19 With this astute observation, Grusin helps us to understand that the mechanism for predicting the future within the diegetic world of the film is not the media apparatus itself but some magical, never fully clarified power of the precogs to see into the near future, to experience the near future as something implicated within the present. On this score, the vision of precrime depicted in Spielberg’s Minority Report stands opposed to the vision laid out by Brennan in his recent testimony: whereas the former indulges a fantasy of cognitive pre-anticipation of the future, the latter is rooted in the power of large-scale data analysis to reveal partial propensities stretching forward from the present world to (differently weighted) possible future worlds. And if the former finds its appropriate aesthetic analogue in the media technology of cinema—which is perfectly equipped to present the precogs’ visions as recordings of “future events”—the latter is rooted in the media technics of predictive data analysis that operates at a level of granularity and fragmentation and with an ineliminable degree of uncertainty antithetical to consolidation in the form of the “future event.” The discoveries of predictive analytics are discoveries of micrological propensities that are not directly correlated with human understanding and
affectivity and that do not by themselves cohere into clearly identifiable events: such propensities simply have no direct aesthetic analogue within human experience. Given the broader political and (especially) technical context within which it was made, Spielberg’s choice to retain the central conceit of Dick’s 1956 story—the magic element provided by the precogs—immediately labels the film as an allegory. Far from being a diagnosis of the system of “preventive prosecution” that the Bush administration would soon adopt, the film presents a more open vision of a future in which the political mandate for perfect preemption has simply, if magically, been translated into reality. This openness makes Minority Report an ideal exemplification of the logic of premediation. Indeed, the fantasy of precognition is the perfect allegorical expression of the imperative to premediate that, in Grusin’s understanding, structures post-9/11 media culture: like the precog visions, the incessant production of mediations of possible futures is designed to ensure that the “future can be remediated before it happens.”20 With its basis in data mining and predictive analytics, the recent expansion of the government’s right to kill marks a clear departure from the immunologic of premediation. What is at stake in Brennan’s claim for executive authority over drone killings—especially when these target individuals who have yet to commit crimes—is the causal efficacy of data and the “right” to base life-or-death decisions on it. Where Grusin’s premediation fills in (or covers over) the uncertainty of the future with a torrent of mediation, contemporary precrime mobilizes the weapons of prediction in order to face that uncertainty head-on. Clearly, the two “logics” address our contemporary situation in starkly divergent ways. This divergence surfaces in Grusin’s characterization of premediation as akin to a video game:

More like designing a video game than predicting the future, premediation is not concerned with getting the future right, as much as with trying to map out a multiplicity of possible futures. Premediation would in some sense transform the world into a video or computer game, which only permits certain moves. . . . Although within these
premediated moves there are a seemingly infinite number of different possibilities available, only some of those possibilities are encouraged by the protocols and reward systems built into the game. Premediation is in this sense distinct from prediction. Unlike prediction, premediation is not about getting the future right. In fact it is precisely the proliferation of competing and often contradictory future scenarios that enables premediation to prevent the experience of a traumatic future by generating and maintaining a low level of anxiety as a kind of affective prophylactic.21

With its sustained clarification that premediation is not about “getting the future right,” this passage could not express more clearly and unequivocally the fact that premediation operates exclusively at the level of ideology. Indeed, premediation marks what we might well consider a new stage in the operation of ideology, one that marks the transformation of affectivity into the very engine of ideology. In this respect, premediation is remarkably similar to Massumi’s preemption: in both cases, affectivity (whether low-level anxiety or free-floating fear) comprises a resource allowing action to be taken in the present. That is why Grusin and Massumi come to the same conclusion: the messy operation of material reality—an operation that takes place to a large degree beyond the grasp of knowledge—is subordinated to the quasi-causality produced through a sanitized, event-focused representation of the future designed to forestall the threat of the unknowable. Whether such forestalling occurs through the incessant proliferation of preemptive measures or the frenzied multiplication of premediated scenarios is ultimately beside the point: the only thing that really matters is the decision to jettison the imperfections of causal prediction in favor of a closed, quasi-causal loop that, ultimately, simply replaces causality with ideology. Once this decision is made, facts about reality become strangely irrelevant, as does the very notion that the future in fact bears differentially on our predictions in the present (or, using Grusin’s terminology: that one can get the future at least partially right). By substituting the cultural logic of premediation for the material logic of prediction, Grusin’s analysis does not simply deprive
us of important tools for diagnosing our contemporary situation; more significantly, it tacitly makes common cause with a far broader operation of data gathering and predictive analytics on the part of government, military, and private industry that is predicated on a functional splitting of operationality from representation, and an obfuscation of the former by the latter. As I have argued elsewhere, today’s data industries operate on the basis of a system of information gathering and analysis designed to leave citizen-consumers out of the loop.22 A case in point is contemporary social media, where the affordances of particular platforms are ultimately nothing other than “lures” to generate activity, and hence data, that fuels a predictive engine for the production of surplus value. The key point here is that this “system” combines ideology and operationality in order to secure ever more effective command over the future, or more precisely, over the future’s agency in the present. Viewed against this broad backdrop, premediation can be seen to perform a dual role: on one hand, understood as a desire that stems ultimately from the uncertainty of the future, it operates as stimulus—as lure—for the production of ever more mediation; and on the other, understood as a proliferation of media events that take the place of causal analysis, it functions to obfuscate the underlying reality, namely that policy decisions are being made on the basis of predictive analysis, yielding what must ultimately remain uncertain, probabilistic judgments concerning the future.23 What this means is that the two logics at issue here—of premediation and of prediction—are not incompatible, or rather only become incompatible when placed on the same level, when offered as alternative causal (or quasi-causal) logics. By shifting the terrain upon which the notion of precrime gains meaning, Brennan’s testimony offers an important opportunity to think beyond the ideological obfuscations perpetrated by the logics of premediation and preemption. Expanding on the November 2011 white paper, Brennan’s claim for executive right to kill in cases involving future crimes is rooted not in a closed, circular, self-creating quasi-causal loop but ultimately in an argument about the justification of lethal force on the basis of predictively secured, though intrinsically uncertain, and inherently partial, probabilities. There are certainly serious questions that will need to be asked
about such justification, as well as political resistance that will need to be mounted to prevent potential executive overreach. But the fact remains that the terms of today’s discussions of “precrime” mark a wholesale break with the fetishizing of the unknown unknown that occurs, whether for strategic or sincere reasons, when it is made to function as virtual final cause of the politico-cultural logics of preemption and premediation.
Prediction as Access to Worldly Sensibility

On this score, we might well turn from Minority Report to the recent television drama Person of Interest to find an appropriate allegory for our contemporary predictive condition. Like Minority Report, Person of Interest centers on the use of data to predict and ultimately shape the happening of the future; yet whereas the film features three human precogs producing dreamlike visions of future crimes, the television series focuses on the output of a mysterious machine that processes all of the data generated by computational sensors, cell phones, and Internet activity in order to predict the involvement of individuals in situations that will somehow involve murder. For my purposes here, two main factors distinguish Person of Interest from its precursor. First, the depiction of the “machine” at its core explicitly recognizes the partiality of intelligence generated through predictive analytics and takes this as a key element: it requires the involvement of human protagonists in the ongoing development of events out of diverse data concerning future propensities. Second, the data generated by the machine is portrayed as absolutely inscrutable to human understanding and subject to no protocols of hermeneutic decipherment; these data function simply as a spur to solicit the future-affecting involvement of human actors. What these two differences underscore is Person of Interest’s explicit concern with the predictive condition of contemporary life: not only does it allegorize this condition in the form of superhero-like fantasy resolutions of predicaments involving individuals dehumanized by twenty-first-century capitalism, but it does so always in a way that embraces the uncertainty inherent to the predictive logic informing the operations of today’s military-entertainment complex.
Reflecting the spillover from military to civilian surveillance documented by the November 2011 white paper, the show’s diverse narratives all involve some aspect of the generalization of data mining and predictive analytics at issue in our current cultural moment: although the machine was built to identify potential terrorists, the machine’s architect, a character named Finch, built in a “back door” allowing him access to the extraneous data produced by the machine—data that, while irrelevant for the machine’s military purpose, predict the occurrence of crimes in the mundane domain of everyday life. In this sense, the show would appear to allegorize not simply the predictive condition of life during wartime, as does a show like Homeland, but the more general predictive condition of life lived within the complex networks created by twenty-first-century media. That is why Person of Interest, despite being in development prior to March 2012, could be taken for a direct response to the expansion of the government’s right to gather private data. In her December 2012 exposé of the leaked NCTC Guidelines, Wall Street Journal reporter Julia Angwin characterizes the policy shift at issue here as a marked “departure from past practice, which barred the [NCTC] from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.” The new rules “now allow the little-known National Counterterrorism Center to examine the government files of US citizens for possible criminal behavior, even if there is no reason to suspect them.” Even more shockingly, the rules allow the NCTC to “copy entire government databases—flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent US citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited.” The new policy, as Angwin clearly discerns, effectively ensures that every American is treated as a potential “imminent threat”: not only can information be used in the present “to look for clues that people might commit future crimes,” but it is stored expressly as a source of potential, which is to say, with a view to its future relevance. As Angwin astutely notes, this amounts to holding individuals responsible in the present for possible activities—possible
crimes—that do not exist as such in the present, for crimes that, as it were, have yet to happen: “A person might seem innocent today, until new details emerge tomorrow.”24 Along with the ubiquitous operation of data tracking and gathering that increasingly informs our mundane uses of computational technologies, this expanded scope of governmental data gathering creates the predictive condition allegorized by Person of Interest. What is striking about the show, however, and what distinguishes it from Minority Report and a long history of dystopian meditations on the erosion of privacy, is the positive spin it puts on our predictive condition: despite its operation beyond the bounds of human understanding, data are figured in the show as the means for justice to be served at the level of everyday life, in relation to ordinary persons. I have elsewhere sought to characterize the use of data gathering and predictive analytics for human enrichment in terms of media pharmacology (pharmakon, Greek for “poison” and its “antidote”), and specifically as the pharmacological recompense for the marginalization of human modes of experience (consciousness, sense perception, etc.) that ensues with the advent of our predictive condition.25 To do so, I correlate this recompense with the expanded domain of sensibility—what I have called “worldly sensibility”—that is made accessible to us by twenty-first-century media. Specifically, technical access to and production of data about levels of experience that remain outside our direct experience, but that nevertheless affect our experience, give us the potential to gain an expanded understanding of our own experience and its implication within larger worldly situations. At the heart of this argument, as I announced previously, is a critical engagement with the philosophy of Alfred North Whitehead that centers on Whitehead’s environmental approach to process and the fundamental role that data play in his account of the world’s becoming. This engagement aims to radicalize Whitehead’s own radicalization of perception. Accordingly, whereas Whitehead seeks to re-embed sense perception in the broader vectors of “causal efficacy,” and introduces “non-sensuous perception” (perception “in the mode of causal efficacy”) to do so, I suggest that perception in both of its modes arises within and out of a broader environmental surround that remains to a great extent
opaque to its regimes of presentation. Twenty-first-century media, including data gathering and analysis, furnishes a crucial and largely unprecedented means to access this broader environmental surround—the superjectal subjectivity of objectified concrescences of “data”26—and to translate its data (what I call “data of sensibility”) into a form that can be presented, or more precisely “fed-forward,” into (future) perceptual consciousness.27 This focus on the broad, or as Whitehead conceives it, the “total” environmental situation informing every actual occasion shifts the terrain on which media has long been theorized; specifically, it displaces the prosthetic narrative of media technology—a narrative that stretches from Plato to McLuhan and most recently to Bernard Stiegler—in favor of a model of technical distribution that dislodges perceptual consciousness and embodiment from their privileged position as exclusive synthesizers of media’s experiential impact. French writer Éric Sadin aptly characterizes this displacement as an “anthropological turning point” in our species-constitutive relation with technology:

Historically, the relation to the technical object has been instituted and developed on the inside of a distance that aims to make good—“from the outside”—deficiencies of the body and to amplify its physical capacities; our period marks the end of this distance, to the benefit of an ever more closed-in proximity. A displacement of the conception relative to techne is de facto called for, the latter no longer being envisaged, following the Western philosophical tradition, as a palliative and “prosthetic” production, or again, in the more informed manner described and analyzed by Leroi-Gourhan, as a relation of dynamic intermixture between instruments and corporeality. Techne must from now on be understood as an enveloping of virtualities offered to the body, which constitutes the fundamental anchor point for present and future technological evolutions, and which induces an automatized and fluid relation to the milieu.28

Sadin’s claim lends forceful expression to the contemporary shift in our experience of technics: we no longer confront the technical
object as an exterior surrogate for consciousness or some other human faculty, but rather as part of a process in which technics operates directly on the sensibility underlying—and preceding—our corporeal reactivity and, ultimately, our conscious experience. To be even more precise: the “reactive corporeity” that Sadin theorizes engages contemporary technics—data gathering, microcomputational sensing, predictive analytics—as a “radical exteriority” within the interiority of experience.
Propensity, or “Real Probability”

To the extent that this data—“data of sensibility” that are accessible only through technical means—are in fact data about probabilities bearing on future occasions, they call for a fundamental shift in how we approach media: we must cease focusing on acts of discrete agents and seek to understand tendencies for situations to happen, tendencies that are informed by a wide swathe of (mostly) environmental data, all of which is gathered incrementally and molecularly. In his recent account of tracking technologies and urban life, media artist and critic Jordan Crandall perfectly captures this priority of tendency over actuality: not only must discrete actions be redescribed as “performatively constituted action-densities, inferred through calculative, predictive or pro-active operations,” “actuality” itself must be understood to be “conditioned by tendency” and agency to be “embroiled in a calculative, mobilizing externality” in which it “pushes and is pulled outward, as if seeking to become the predisposition that it courts.”29 This prioritizing of tendencies over actions is precisely what allows predictive analytics to generate meaningful probabilities even when “datasets” cannot be totalized. Scenarios in which the total situation informing an occasion cannot be known yield open-ended probabilities or “probabilities in the wild”—probabilities that differ categorically from the classical “calculus of probability” with its basis in the equipossibility of outcomes. To grasp this distinction, one need only consider the famous definition offered by Pierre-Simon Laplace: “The probability of an event is the relation of the number of favorable outcomes to the total number of possible outcomes considered to be equiprobable.”30 As Jean-René
Vernes explains, the calculus of probabilities hinges on two essential principles: the possibility of defining equiprobable outcomes and the independence of each move. In his study of “aleatory reason,” the aptly titled Critique of Aleatory Reason, Vernes goes on to characterize the knowledge of probability as a form of a priori knowledge—the “a priori possible”—that charts a very different course out of Humean skepticism than the Kantian legacy we know all too well. Central to the tradition theorized by Vernes is the notion that the calculus of probability correlates with an “object of immediate certainty”: “In the case of a well-made die, the six faces have an identical probability of appearing, because they appear identical in the representation that we have of them. They are interchangeable. . . . The link between the structure of the die and the series of results is a logical link, although of a different kind from what is typically understood under the term.”31 The key point here is that knowledge of probability is a priori: it is rooted in an a priori understanding of the equipossibility of each outcome. To this a priori knowledge of equipossibility corresponds the experiential notion of “frequency”: a frequency results when a series of moves tends toward a limit value. Although they are experiential verifications of something known a priori, frequencies remain bound to the two essential conditions of the calculus of probabilities: they are a function of equipossibility and independence of outcomes. It goes without saying that the operation of predictive analytics concerns open and incomplete “datasets” that are vastly more complex, and less discretely articulated, than the six possible outcomes of a dice roll. Indeed, the passage from the rarefied domain of pure chance (Vernes’s a priori aleatory reason) to the real world would seem to yield an ontological transformation of probability itself: probability ceases to function on the basis of mere possibilities and instead comes to operate as the index of “real propensities.” Such a transformation is precisely what is at stake in Karl Popper’s “propensity interpretation of probability.” As Popper explains in A World of Propensities, “There exist weighted possibilities which are more than mere possibilities, but tendencies or propensities to become real [or, as I would prefer to say, that are real]: tendencies or propensities to realize themselves which are inherent in all possibilities in various degrees. . . .” Still more emphatically, Popper
claims that propensities “are not mere possibilities but are physical realities. They are as real as forces, or fields of forces. And vice versa: forces are propensities.” “We live in a world of propensities,” and in real world situations, Popper concludes, there simply are no equal possibilities—and indeed, no meaning whatsoever to the notion of equal possibility; hence, “we simply cannot speak here of probabilities in the classical number sense.”32 Whatever explanatory and causal value predictive analytics of large datasets have is, I suggest, ultimately rooted in this ontological transformation whereby probabilities are understood to be expressions of the actual propensity of things. In this respect, Popper’s conceptualization of probability as propensity provides a bridge to link Whitehead’s account of worldly propensity as causal efficacy, as an expression of the present’s impinging on the future (or the future’s operating in the present), with the predictive condition informing contemporary life. Indeed, Popper’s conceptualization helps us translate Whitehead’s understanding of data as dynamically oriented to the future into the terminology of probability theory so central to contemporary technocultural mediations of worldly process. More than any other element of Whitehead’s neutral philosophy of experience, it is the probabilistic underpinnings of “real potentiality”—the way that present data already implicate their potential future power in their present operationality—that make him the preeminent philosopher of twenty-first-century media. “Real potentiality” designates the potentiality of the settled universe that informs the genesis of every new actuality along with the incessant renewal of the “societies” that make up the world’s materiality (worldly sensibility); as such it instigates a feeling of the future in the present: an experience of the future exercising its power in anticipation of its own actuality. Because this power remains that of potentiality—and indeed of an incredibly complex network of potentiality, a network inclusive of the potentiality of every datum comprising the universe’s current state—it can only be fixed or arrested probabilistically, though to be sure in a quite singular sense. The force of the future—the future force of every single datum informing the universe at a given moment—is felt in the present in a way that can only be represented probabilistically and where such
representation designates neither a purely abstract likelihood nor a statistical likelihood relative to a provisionally closed dataset, but a properly ontological likelihood: a propensity, which is to say, a likelihood that is, paradoxically, real. Indeed, Whitehead’s striking decision to include the total situation of the universe as determinative of each and every moment of its becoming lends depth to our conceptualization of propensity; specifically, it manages to capture the open-endedness of propensity: propensity names what in the present is always-already on the way toward its own future, but crucially on its way to a future that is itself not yet determined, that remains open to multiple possibilities. In this sense, the probabilistic dimension of Whitehead’s concept of “real potentiality” differs starkly not simply from the a priori calculus of probability but from all empirical probabilistic systems: because of its grounding in the total situation of the universe’s becoming, it is resolutely speculative in Whitehead’s understanding of the term.33 It is precisely because of its speculative status that Whitehead’s conception of potentiality furnishes the ontological basis of prediction. Insofar as it designates the future propensity of the present, real potentiality grounds the power of probability that informs the operation of today’s predictive industries and that lends a certain credibility to cultural fantasies of control over the future. This is because, in Whitehead’s understanding, probabilities are expressions of real forces, of actual propensities rather than empty statistical likelihoods. Rather than predicting the likelihood of future events on the basis of present and past data that are effectively inert, Whitehead’s account foregrounds the emergence of the future—and specifically of novelty in the future—on the basis of the real potentiality of the settled world at each moment of its becoming. If the future is felt in the present, that is precisely because the future literally is (or will be) produced from out of the real potentiality—on the basis of the superjectal intensity—of the present settled world in all of its micrological detail. The key point is that the connection between future and present proceeds by way of efficacy, or better, propensity, and not of prediction. The connection is real and not just statistical, or, in Whitehead’s terms, actual without being actualized.
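The contrast rehearsed over the last several paragraphs, from Laplace’s classical calculus through Popper’s propensities, can be restated compactly. The notation below is my own schematic summary, not a formula found in Laplace, Vernes, or Popper:

```latex
% Laplace: probability presupposes equipossible, independent outcomes
P(A) = \frac{m}{n}
\quad \text{($m$ favorable among $n$ equiprobable cases; a well-made die gives $1/6$ per face).}

% Frequency: repeated independent moves tend toward the a priori value
f_N(A) = \frac{1}{N}\sum_{k=1}^{N}\mathbf{1}_A(x_k) \;\longrightarrow\; P(A)
\quad \text{as } N \to \infty.

% Popper's propensity: outcomes carry unequal, physically real weights $w_i$,
% so there is no set of equipossible cases left to count
P(A_i) = \frac{w_i}{\sum_j w_j}, \qquad w_i \neq w_j \text{ in general.}
```

On the propensity reading, the weights are not bookkeeping conveniences but, as Popper insists, realities on a par with forces; this shift from counting cases to weighting tendencies is the ontological transformation the argument above attributes to probability in the wild.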
That superjective intensity explains the power of the future in the present becomes clear in Whitehead’s description of the Eighth Category in Process and Reality:

(viii) The Category of Subjective Intensity. The subjective aim, whereby there is origination of conceptual feeling, is at intensity of feeling (α) in the immediate subject, and (β) in the relevant future. This double aim—at the immediate present and the relevant future—is less divided than appears on the surface. For the determination of the relevant future, and the anticipatory feeling respecting provision for its grade of intensity, are elements affecting the immediate complex of feeling. The greater part of morality hinges on the determination of relevance in the future. The relevant future consists of those elements in the anticipated future which are felt with effective intensity by the present subject by reason of the real potentiality for them to be derived from itself.34

To understand the full force of this claim, and specifically its promise to open a novel perspective concerning prediction, let us introduce philosopher Judith Jones’s account of how intensity generated by data in the present is itself the source for superjectal subjectivity.35 For Jones, the subject referenced in the final line of the previous citation simply is the agency of the contrast yielding intensity: “The agency of contrast is the subject, the subject is the agency of contrast. To be a subject is to be a provoked instance of the agency of contrast, and that is all it is.”36 This important interpretation underscores how the real potentiality of the future is already felt as intensity in the present—is felt, that is, prior to its actualization and in its full force of potentiality: this feeling of potentiality for the future generates—indeed, simply is—the subject. Jones’s point here is crucial: subjectivity, insofar as it is the intensity produced by contrasts of settled data, simply is a distillation of real potentiality for the future that is felt in the present. As such, subjectivity cannot be restricted to the status of inert force in the present, but literally upsurges in and as the transition from present to future: by effectively introjecting the future—the force
of historically achieved potentiality—into the present, subjectivity arises in the in-between-present-and-future. It is the force that makes the future arise continuously out of the present, as possibility already (partially) contained in real potentiality. This, indeed, is the deep meaning of Whitehead’s assertion that by “subject” he always means “subject-superject”: subjectivity always arises on the basis of the power of the settled world, from “real potentiality”—which is to say, from the superjective forces of the present, the constraint the present exercises over the future. For Whitehead then, the future is already in the present, not simply as a statistical likelihood, however reliable, but because each new concrescence is catalyzed into becoming by the superjectal intensity or real potentiality—the future agency—of the universe itself!
Recording the Future?

In sketching a broad shift from preemption through fear to preemption through prediction, Brennan’s testimony addresses much more than the issue of drone killings. Indeed, as I have sought to suggest, it effectively foregrounds the sweeping intrusion of predictive analytics into the daily life of ordinary citizens that comprises the mandate of the November 2011 Department of Justice white paper. To begin to grapple with this intrusion—and with how Whitehead’s ontology of probability might help us understand it better—let us focus on an example of predictive technics—a “third-generation” search technology that promises, quite literally, to record the future. Recorded Future is a small, Swedish intelligence company that sells a data analytics service for predicting future events. Initially financed by small venture capital grants from the CIA and from Google, Recorded Future has developed algorithms that make predictions about future events entirely based on publicly available information, including news articles, financial reports, blogs, RSS feeds, Facebook posts, and tweets. Recorded Future has a client base that includes banks, government agencies, and hedge funds. What it offers is a service designed to monitor the likelihood of future events or, as the company’s press puts it, a “new tool that allows you to visualize the future.”37 What most distinguishes Recorded Future is its status as a
“third-generation” search engine. Rather than looking at individual pages in isolation, as did first-generation engines like Lycos and AltaVista, and rather than analyzing the explicit links between Web pages with the aim of promoting those with the most links, as Google has done since the introduction of its PageRank algorithm in 1998, Recorded Future examines implicit links. Implicit links, or what it calls “invisible links” between documents, are links that obtain not because of any direct connection between documents but because they refer to the same entities or events. To access the power of implicit links between documents, Recorded Future does not simply use metadata embedded into documents, but actually separates the content contained in documents from what they are about; Recorded Future’s algorithms are able to identify in the documents themselves references to events and entities that exist outside of them, and on the basis of such identification, to create an entirely new network of affiliations that establish relations of meaning and knowledge between documents rather than mere associations.38 What is most crucial here is what Recorded Future does with the references it identifies, how it manages to construct those shadow references into a meaningful knowledge network with predictive power. To do this, Recorded Future ranks the entities and events identified by its algorithms based on a myriad of factors, the most important of which include the number of references to them, the credibility of the documents referring to them, and the occurrence of different entities and events within the same document. The result of this analysis is a “momentum score” that, combined with a “sentiment valuation,” indexes the power of the event or entity with respect to its potential future impact. For example, as journalist Tom Cheshire notes, “Searching big pharma in general will tell you that over the next five years, nine of the world’s fifteen best-selling medicines will lose patent protection”; the basis for this knowledge, which of course is only a heavily weighted prediction, is the high momentum score of the event, a score due to its being supported by thirteen news stories from twelve different sources.39 We can perhaps best appreciate the substantial predictive power of Recorded Future by focusing on another, equally crucial feature: its temporal dynamics. Recorded Future includes a time and space
dimension of documents in its evaluation, which allows it to score events and entities that are yet to happen on the basis of present knowledge about them—what, in Whitehead’s terms, we would call data of the settled universe. “References to when and where an event . . . will take place” are crucial, observes Staffan Truvé, one of Recorded Future’s cofounders, “since many documents actually refer to events expected to take place in the future.”40 By using RSS feeds, Recorded Future is able to integrate publishing time as an index for this temporal analysis. Such temporal analysis affords Recorded Future the capacity to weight opinions about the likely happening and timing of future events using algorithmically processed crowdsourcing and statistical analysis of historical records of related series of such events. The result: differentially weighted predictions about the future.41
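The published descriptions of Recorded Future cited here name the ingredients of its ranking (reference counts, source credibility, co-occurrence, a sentiment valuation, and time-stamped references to future dates) but not the algorithm itself, which is proprietary. The Python sketch below is therefore only a guess at the general shape of such a “momentum score”; every weight, field, and document in it is invented for illustration:

```python
from datetime import date

# Invented documents: each refers to the same future-dated event, with an
# assumed source credibility in [0, 1], a sentiment in [-1, 1], a count of
# co-mentioned entities, and a publishing date (as from an RSS feed).
documents = [
    {"credibility": 0.9, "sentiment": -0.2, "co_mentions": 3,
     "published": date(2013, 1, 5), "event_date": date(2014, 6, 1)},
    {"credibility": 0.6, "sentiment": -0.5, "co_mentions": 1,
     "published": date(2013, 2, 1), "event_date": date(2014, 6, 1)},
    {"credibility": 0.8, "sentiment": -0.1, "co_mentions": 2,
     "published": date(2013, 2, 20), "event_date": date(2014, 6, 1)},
]

def momentum(docs, today=date(2013, 3, 1)):
    """Toy momentum score: many recent, credible, well-connected references
    to the same future-dated event push the score up."""
    score = 0.0
    for d in docs:
        if d["event_date"] <= today:
            continue  # only references pointing beyond the present count here
        recency = 1.0 / (1.0 + (today - d["published"]).days / 30.0)
        score += d["credibility"] * (1.0 + 0.1 * d["co_mentions"]) * recency
    return score

def sentiment_valuation(docs):
    """Credibility-weighted mean sentiment across the referring documents."""
    total = sum(d["credibility"] for d in docs)
    return sum(d["sentiment"] * d["credibility"] for d in docs) / total

print(f"momentum score:      {momentum(documents):.2f}")
print(f"sentiment valuation: {sentiment_valuation(documents):.2f}")
```

Even at toy scale the design point is visible: what gets scored is not a document but a future-dated event, ranked in the present on the basis of references that point beyond themselves.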
The Power of the Future in the Present

What accounts for Recorded Future’s specificity as a third-generation search engine—its focus on data that implicate the future in the present—is precisely what constitutes the potential pharmacological recompense of today’s predictive technologies: the open-endedness of data’s potentiality. Whatever power it is that allows Recorded Future to make reliable predictions of future developments is a power that is not specific to it and that is not created by its algorithms. Rather, it is a general power—the power of the future in the present—that operates at all levels of the universe’s continual becoming. It is, in short, an ontological power—the very ontological power Whitehead seeks to explain—and as such it is rooted in the total situation of causal efficacy that is captured with such precision (though of course, only partially) by today’s technical data gathering and predictive analytical systems. The crucial point here is that this causal efficacy, despite its immense complexity (remember it encompasses the superjectivity of every datum of the current world), is both “neutral” regarding its future use and always excessive in relation to any targeted deployment of it. This means, to put it slightly differently, that surrounding any delimited predictive system is a larger field of data—what I elsewhere call a “surplus of sensibility”—that, viewed speculatively,
indexes the causal efficacy of the total situation within which this delimited system operates.42 (Effectively, the latter gains its reliability from closing off this larger surplus of sensibility, thereby transforming an always excessive propensity into a [provisionally] closed dataset.) Because it affords data that exceed whatever any given predictive system might include, the data of the world’s causal efficacy—the data constituting its real potentiality—always and in principle facilitates knowledge that cannot be restricted to any particular agenda. In this sense, reclaiming the surplus of sensibility from today’s data industries—liberating it from capture in concrete networks of predictive power—comprises the first task of a pharmacology of media that would restore data’s potential to offer broad insight about future tendencies that implicate humans;43 to the extent that it potentially counteracts the control instituted through provisionally closed predictive systems, such insight would constitute a recompense that lies at the very heart of our contemporary predictive condition. This pharmacological recompense, let me emphasize, is an intrinsic, structural element of contemporary technical mediations of the future: any system for data gathering and predictive analytics—because it operates on a “total situation” that it cannot hope to encompass in its entirety and that it can only speculatively intend—only ever actualizes a small part of a potentiality that continues to remain potential despite this actualization, that continues to exert its ontological power in the “environment” of this system. Whitehead’s ontology of real potentiality forms a kind of check against the imperative to close off this surplus—the very imperative driving our predictive culture—at the same time as it explains the very power that grounds prediction itself. Thus, as we seek to understand and to criticize the forms of predictive power that increasingly enframe—and constrain—our experience, it is imperative that we welcome—on this score, in concert with the very predictive industries that are at issue here—the technical interface to the data of sensibility making up the potential for our future experience. For it is only by recognizing the immense power of the data networks to which contemporary technologies afford access—and also by accepting the accompanying demotion of historically human modes of experience (sense perception, conscious
awareness, etc.)—that we can make good on Whitehead’s fundamental contribution toward theorizing our predictive condition. In this context, what the example of Recorded Future underscores is the very potential of Whitehead’s ontology of real potentiality: in Recorded Future’s weighting of present predictions concerning the future—but also, and more fundamentally, in the ontological source of these weighted predictions, the future’s status as “real component of what is actual”—we encounter the power of the future to shape the present, and with it, the power of prediction as more than a mere statistical entity.44 With third-generation search capacities, the mining and analysis of data takes a “Whiteheadian turn” in the sense that it ceases to ground the power of prediction in a recursive analysis of past behavior, and instead—taking full advantage of recently acquired technical capacities for text analysis—channels predictive power through the reference of present data to future entities and events. In this sense, we might say that Recorded Future—and the technical innovation it exploits—concretizes or instantiates Whitehead’s understanding of how the future is felt by the present, that is, by reference. Recorded Future indexes the fundamental insight of Whitehead’s ontology of potentiality: “Actual fact includes in its own constitution real potentiality which is referent beyond itself.”45 A Whiteheadian understanding of Recorded Future reveals a “positive” dimension of prediction: more than a mere extrapolation of the causal force of the present and the past to future possibility, prediction concerns the potentiality contained in the transition from present to future. The key point is that this potentiality, despite being imperfectly reliable as a ground for prediction, has ontological power: indeed, it is precisely this power that informs Whitehead’s specification of real potentiality as the mode through which the future is felt in the present. Whitehead’s contribution thus encompasses a critical and a constructive element, both of which are crucial to our efforts to understand and to live with our predictive condition. By furnishing a speculative account of the total situation informing the genesis of every new actuality, Whitehead’s account in effect foregrounds the impossibility for any empirical analytic system—no matter how computationally sophisticated and how
much data it can process—to grapple with the entirety of real potentiality, or anything close to it. Rather, systems like Recorded Future can—and no doubt will—get more reliable by including more data, but their reliability will always be purchased at the cost of inclusiveness: reliability, in short, is a function of the capacity to close off some data from the larger universe of data surrounding— and complicating—it. Accordingly, there will always be a surplus of data that remain available for the future in the mode of potentiality. In this sense, Whitehead’s speculative account serves as a critical check on the totalizing impulses of today’s data industries, a guarantee of sorts that the future, insofar as it can be felt in the present, can never be fully known in advance. By facilitating a model of technical distribution of sensibility rooted in an expansion of perception beyond consciousness and bodily self-perception, Whitehead’s philosophy makes room for the technical innovation at the heart of Recorded Future—the capacity to search the present for predictions about the future—to impact human experience in ways that go beyond the narrow and largely instrumental purposes that inform governmental and corporate deployments of it. The capacity to predict future events by way of present reference introduces a means to access more data that is relevant to human behavior—but that remains inaccessible through human modes of perception; as such, it makes more data available for the shaping of human behavior in the future. Isn’t this twofold investment in the power of potentiality precisely the source for the appeal of Person of Interest, in the sense that it features superhero-like characters who have imperfect knowledge of the predictions of an all-knowing but fully mysterious “machine” and who must become involved in situations—and must embrace the uncertainties of acting—if they are to prevent predicted future murders? With its obsessive concern for the imperfections of predictive knowledge, Person of Interest dramatizes both the negative and the positive elements just described. Its plots develop from the tension between the machine’s knowledge and the characters’ need to become involved to discover, always through a gradual and circuitous process, how the predictive information (a social security number) relates to events that are in the process of developing. In this sense, the information the machine gives is not
a “premediation” of a future scenario or event, but the final output of a complex and mysterious process of predictive analysis that does not forecast a preordained event with any degree of certainty but that operates incrementally on data relevant to broader situations in which events might come to occur. The show’s ideological work is focused less on gaining public support for military surveillance than it is on acclimating us to our predictive condition, and it performs this work precisely by insisting on the relevance, indeed on the centrality, of human action and decision making in the midst of situations in which human agents would seem to have no cognitive overview or mastery whatsoever. If the show depicts prediction as intrinsically partial, it does so in relation less to the concrete limitations of any finite predictive system than to pragmatic concerns that explain the poverty of the machine’s output (e.g., the need to keep the “back door” from being discovered) and at the same time serve to guarantee the continued relevance, indeed centrality, of human actors. The show would seem to assure us, at the very moment when a military predictive engine has acquired apparently total knowledge of the present, that real life will continue to require our distinctly human modes of deliberation and agency. With this assurance, we find ourselves more willing than before to allow today’s predictive technologies to operate as agents in an expanded grasp of the world’s sensibility: as long as they don’t threaten our relevance, these technologies can be invested with the power to ameliorate human existence. We can now see why Person of Interest provides a counterpoint to Minority Report. Whereas the latter focuses on a magical technology for recording the near-future before it happens, the former focuses on the new model of distributed technical agency that, I suggest, has increasingly become the reality of our predictive condition.
From Premediation to Prediction

To conclude my exploration of the ontology of prediction, let me simply introduce Person of Interest into Grusin’s threefold account of temporality in Minority Report. Grusin’s characterization of the three regimes of temporality takes shape in relation to his mistaken characterization of prediction as “a future determined by the
sequence of past events.” Grusin deploys the “idea of prediction” to discount two forms of predictive temporality: one in which the future is simply added onto the present and past in accordance with the operation of “rules, laws, and habitual behaviors”; another in which this same schematization of time is undercut by an ineliminable margin of indetermination, the “free choice” available to individuals “at every moment.” In their place, Grusin champions a regime of premediated temporality: aligned with the fictional perspective of Agatha, the precog who provides the minority report, this account invests in the “virtuality of premediation,” “the idea that there are multiple potential futures, and that these future events always and already impinge upon the present.”46 Grusin telescopes the key distinction between prediction and premediation through the contrast between “majority” and “minority” positions as they are represented in the film: in the precrime version of things, “the future seen by the precogs determines the present in the same way that the past would”; in Agatha’s version, on the other hand, “the weight or force of these futures impact the present but do not determine it.”47 I would concur with Grusin as far as this contrast goes: clearly the film is an allegory of the power of the future to impact the present. But I would strongly resist Grusin’s dismissal of prediction as either incompatible with or irrelevant to this indeterminate mode of impact. Indeed, I suggest that prediction, once liberated from its orientation toward static and inert past data, is in fact necessary to make the “virtualities” of premediation “real.” The passage from which I excerpt these key words makes clear just how much Grusin’s argument depends on his dismissal of prediction:

Minority Report exemplifies that to see premediation as the remediation of virtuality or potentiality is to recognize that there are always multiple competing and incomplete reals—multiple actualities which can emerge from any potential present, but which emerge not by negation or addition but by differentiation and divergence from other potential but never realized actualities. What is key here is that these virtualities are real, these premediations as virtualities have a
reality in the present, a force in the present, no matter how the future might turn out. That is, the model of possibility or prediction in scenarios, game-planning, or simulation ultimately involves the creation or determination of distinctions between false or illusory possibilities on the one hand and the real or the actual on the other—only those possible scenarios that come true are real, while the others are proved false or illusory or wrong. To think of premediation as virtual, and therefore as real, is to refuse this metaphysical distinction and to insist instead on the efficacy, or force, of the multiplicity of premediations in and of themselves—no matter how the future might actually turn out.48

Where Grusin goes wrong is in his characterization of prediction as bound to a repertoire of predetermined possibilities. However, before we take stock of the significance of this fundamental mischaracterization—and before we foreground the necessity for a different model of prediction here—let us follow out the logic of what Grusin does claim. If, that is, prediction can do no more than present a choice between false and real possibilities, then it is left to premediation itself—premediation as the presentation of a host of virtual futures—to explain how the future arises from the present. This seems to be precisely the position that Grusin adopts—or, as I shall claim, is compelled to adopt—at the end of this passage (which concludes his chapter on “Premediation”): in contrast and as an alternative to inert predictions that carve up the future as a repertoire of extensions of the past, premediations engage the future not as a single, predetermined outcome that can be known from the position of the present, but as a virtuality that can encompass different present projections, different premediations, all of which (allegedly) engage the reality of the future in the present. The ultimate culmination of this logic—one which Grusin cannot himself resist—involves the transformation of causality from the real to premediation—the substitution, for the causal efficacy of the world, of the “efficacy, or force, of the multiplicity of premediations in and of themselves—no matter how the future might actually turn out.” Against this conclusion and the logic that culminates in it, let us highlight the error on which it relies: by attributing causal force to the
premediations themselves—which are, after all, representations of the future in the form of media events—Grusin mistakes allegory for the real that it allegorizes. Even though he is right to insist on the virtuality—or, as I prefer to call it, the “real potentiality”—of the future, Grusin’s decision to channel its causal force through the form of the premediated event not only imposes a particular, and in this case particularly limiting, unit (the integral event) on the causal force of the real but also, as a consequence of this imposition, leads him to overlook the distinction between this causal force and its expression. That is precisely why Grusin finds himself compelled to champion premediation itself as cause—or more precisely, to recall Massumi’s argument, as virtual quasi-cause—though as cause not of the actual future (since it doesn’t matter how the “future might actually turn out”), but rather of a host of premediations of the future, which is to say, of nothing other than premediation itself. With this development, we are, in effect, returned to the solipsism of Baudrillardian simulation: for what are premediations if not the circulation of representations in the place of an absent or unknowable real? If, by contrast, we retain the distinction between the future-implicating causal efficacy of the real and the premediation of how that efficacy might produce the future, we will be able to see premediation for what it is—a representation or allegory of the future that abstracts from the actual causal efficacy Whitehead locates in the world in order to produce an immunologic designed to ward off the possibility of the unexpected. Once restored to its representational status, premediation—far from providing a causal explanation of the future—cannot help but beg the question of what grants it causal force. And the answer, as I hope my discussion here has made clear, can only be the causal efficacy (or real potentiality) of the world itself that can never be known in its entirety, but that can be partially, if imperfectly, predicted and represented as discrete probabilities in the wild. To the extent that it allegorizes prediction as the future’s inherence in the present, Person of Interest introduces what we can only consider to be a fourth regime of temporality that extends or supplements Grusin’s threefold account. As the force at the basis of this allegory, prediction on the basis of real potentiality or
“prediction in the wild” seeks to grasp the propensity that carries the present world into the future. The access that large-scale data mining and predictive analytics gives to this propensity is precisely what allows prediction in the wild to “premediate” the future, not as a set of represented media events, but as a partial glimpse into the present operation of real forces that will produce—that are already producing—the future to come.
Notes
1. Julia Angwin, “U.S. Terrorism Agency to Tap a Vast Database of Citizens,” Wall Street Journal, December 13, 2012, http://online.wsj.com/. Holder’s decision appears in the unclassified document, “Guidelines for Retention, Use, and Dissemination by the National Counterterrorism Center and Other Agencies of Information in Datasets Containing Non-Terrorism Information” (hereafter NCTC Guidelines 2012), which is available as a sidebar in Angwin’s article.
2. Jesselyn Radack, “Minority Report: Govt Can Now Spy on Innocent Americans for Future Criminal Behavior,” Daily Kos blog, December 13, 2012, www.dailykos.com/.
3. John Brennan, cited in “Innocent Until Proven Guilty; Imminent Until Proven—Too Late,” Empty Wheel blog, February 11, 2013, www.emptywheel.net/. The position outlined here marks a distinct radicalization of the policy set forth in the November 2011 Department of Justice white paper. (The white paper is available at www.documentcloud.org/documents/602342-draft-white-paper.html.) Whereas the white paper implies (although without actually requiring) that, in the case of American citizens, involvement in past crimes is important for assessing the imminence of a threat, Brennan’s rationale preserves no such implication, and indeed presents the latter not simply as a key point of difference between the courts and the military, but as a part of the threat itself! The rationale here is quite simple: if the military were constrained to exercise its force, its right to kill, only in cases where evidence of involvement in past crimes existed, it would very possibly, indeed almost certainly, miss crucial opportunities to save American lives.
4. Empty Wheel blog, “Innocent Until Proven Guilty.”
5. Brian Massumi, “Potential Politics and the Primacy of Preemption,” Theory & Event 10, no. 2 (2007): para. 23, http://muse.jhu.edu/journals/theory_and_event/.
6. Brian Massumi, “Fear (The Spectrum Said),” Positions 13, no. 1 (2005): 35.
7. Massumi, “Potential Politics,” para. 13, emphasis added.
8. Massumi speaks, in addition to the “unknowable,” of “indeterminacy,” “indeterminate potentiality,” and “objective uncertainty,” “Potential Politics,” paras. 20, 13, 24.
9. And Massumi is unequivocal on this point: “The lack of knowledge about the nature of the threat can never be overcome,” “Potential Politics,” para. 13.
10. Massumi, “Potential Politics,” para. 13; and Massumi, “Fear,” 35.
11. Massumi, “Potential Politics,” para. 13.
12. Massumi, “Fear,” 36.
13. Massumi, “Potential Politics,” para. 23.
14. Massumi notes the Bush administration’s disdain for “reality”: “Truth, in this new world order, is by nature retroactive. . . . The reality-based community wastes time studying empirical reality, the Bushites said: ‘we create it.’ And because of that, ‘we’ the preemptors will always be right. We always will have been right to preempt, because we have objectively produced a recursive truth-effect for your judicious study. And while you are looking back studying the truth of it, we will have acted with reflex speed again, effecting a new reality” (“Potential Politics,” para. 20).
15. Oscar Gandy Jr., “Data Mining, Surveillance, and Discrimination in the Post-9/11 Environment,” in The New Politics of Surveillance and Visibility, eds. K. Haggerty and R. Ericson (Toronto: University of Toronto Press, 2007), 369–70.
16. Alfred North Whitehead, Process and Reality: An Essay in Cosmology, corrected ed. (New York: Free Press, 1979), 66.
17. One key element that the executive-military context adds to this general pattern of prospective data mining is a temporal instrumentality: it renders probability a function of time. Rather than existing in relation to a fantasized total knowability, and conversely, an equally fantasized total unknowability, information is always a compromise, an arrest of an ongoing world (the process of what I’ve called “worldly sensibility”), that reflects the confluence of multiple, and, to some degree or other, incongruous factors. That such a pragmatic approach becomes all the more significant in the context of today’s terrorist threat is a point not lost on the Justice Department. Consider the following passage from its November 2011 white paper, which meditates on the key concept of “imminent threat”: “By its nature . . . the threat posed by al-Qa’ida and its associated forces demands a broader concept of imminence in judging when a person continually planning terror attacks presents an imminent threat, making the use of force appropriate. In this context, imminence must incorporate considerations of the relevant window of opportunity, the possibility of reducing collateral damage to civilians, and the likelihood of heading
off future disastrous attacks on Americans” (www.documentcloud.org/documents/602342-draft-white-paper.html). Interestingly enough in the present context, the white paper goes on to correlate this broadened concept of imminence to the widespread use of technology, as if to position data-gathering and predictive analytics as a new theater, or rather a new “behind-the-scenes,” of war itself: “We are finding increasing recognition in the international community that a more flexible understanding of ‘imminence’ may be appropriate when dealing with terrorist groups, in part because threats posed by non-state actors do not present themselves in the ways that evidenced imminence in more traditional conflicts. After all, al-Qa’ida does not follow a traditional command structure, wear uniforms, carry its arms openly, or mass its troops at the borders of the nations it attacks. Nonetheless, it possesses the demonstrated capability to strike with little notice and cause significant civilian or military casualties. Over time, an increasing number of our international counterterrorism partners have begun to recognize that the traditional conception of what constitutes an ‘imminent’ attack should be broadened in light of the modern-day capabilities, techniques, and technological innovations of terrorist organizations.” John Brennan, “Remarks of John O. Brennan, ‘Strengthening Our Security by Adhering to Our Values and Laws,’ ” address to Harvard Law School, September 16, 2011, www.whitehouse.gov/the-press-office/.
18. Richard Grusin, Premediation: Affect and Mediality after 9/11 (Basingstoke, U.K.: Palgrave Macmillan, 2010), 39.
19. Ibid.
20. To underscore this connection, let me cite the entire passage from which I excerpt this phrase: “Premediation insists that the future itself is also already remediated. With the right technologies—in this case the distributed cognition made possible by the hybridized institution of PreCrime, with its three precogs, nurtured in the appropriate physical environment and attached to the correct hardware and software—the future can be remediated before it happens. This remediation of the future is not only formal but also reformative. Insofar as capital crime can be prevented, precognition allows for the remedying of the future, the prevention of the crime of murder through premediation” (Grusin, Premediation, 39).
21. Ibid., 46, emphasis added.
22. In my book, Feed-Forward: On the “Future” of Twenty-First-Century Media (Chicago: University of Chicago Press, 2014).
23. To his credit, Grusin appears to grasp how the operation of premediation he describes ultimately forms nothing more nor less than a component in a larger system whose aim is to stimulate the production
of ever more data in the service of ever more effective prediction: “The affective life of media and the anticipatory gestures of mediaphilia operate to encourage, make possible, and proliferate an ongoing flow of everyday media transactions, which provide the raw material to be mined so that future, potentially disruptive events of terrorism or other violent attacks can be pre-empted before they ever happen” (Grusin, Premediation, 134).
24. Angwin, “U.S. Terrorism Agency.”
25. Hansen, Feed-Forward. My understanding of the pharmakon derives from Jacques Derrida’s reading in “Plato’s Pharmacy,” as well as Bernard Stiegler’s more recent developments of the concept.
26. A much misunderstood notion, in large part due to Whitehead’s own descriptions, the superject designates the mode of subjectivity that an actuality takes on when it is added to, that is, becomes part of, the objective, settled world. As such, it is as superject that a completed actuality is able to act, not on its own genesis or becoming, but on the becoming of other new actualities and societies of actualities. On this understanding, which incidentally owes much to the work of philosopher Judith Jones, the superject is the mode of subjectivity and the source of power of the “real potentiality” of the settled world, which is to say, of its power to impact the future. The crucial point here, following Jones’s identification of subjectivity with intensity, is how the real potentiality of the future is felt as intensity in the present, prior to its actualization and in its full force of superjective potentiality.
27. This is the central argument of Feed-Forward.
28. Éric Sadin, La Société de l’Anticipation (Paris: Éditions inculte, 2011), 13. Translations mine.
29. Jordan Crandall, “The Geospatialization of Calculative Operations: Tracking, Sensing and Megacities,” Theory, Culture & Society 27, no. 6 (November 2010): 75.
30. Laplace, cited in Jean-René Vernes, Critique de la raison aléatoire, ou, Descartes contre Kant (Paris: Aubier-Montaigne, 1982), 87. Translations mine.
31. Vernes, Critique, 88.
32. Karl Popper, “Two New Views of Causality,” in A World of Propensities (Bristol, U.K.: Thoemmes Antiquarian Books, 1990), 9, last two emphases added.
33. Whitehead’s project in Process and Reality (and related texts) is to provide a speculative account of the universe that explains how it must be structured in order for experience to be what it is. One key point that often gets forgotten by Whitehead’s commentators is that the speculative is not accessible from the perspective of experience.
34. Whitehead, Process and Reality, 27, last emphasis added.
35. Jones’s important book, Intensity, is effectively a reading of Whitehead’s speculative empiricism from the perspective of the Eighth Category. Judith A. Jones, Intensity: An Essay on Whiteheadian Ontology (Nashville, Tenn.: Vanderbilt University Press, 1998), especially chs. 1 and 3.
36. Jones, Intensity, 130.
37. Home page, Recorded Future, www.recordedfuture.com/.
38. Journalist Tom Cheshire pinpoints the significance of this capacity for reference when he compares Recorded Future with Google: “Recorded Future knows who Nicolas Sarkozy is, say: that he’s the president of France, he’s the husband of Carla Bruni, he’s 1.65m tall in his socks, he travelled to Deauville for the G8 summit in May. If you Google ‘president of France,’ you’ll get two Wikipedia pages on ‘president of France’ then ‘Nicolas Sarkozy.’ Useful, but Google doesn’t know how the two, Sarkozy and the presidency, are actually related; it’s just searching for pages linking the terms.” Tom Cheshire, “The News Forecast: Can You Predict the Future by Mining Millions of Web Pages for Data?,” Wired UK, November 10, 2011, www.wired.co.uk/, emphases added.
39. Ibid.
40. Ibid.
41. Despite its superficial similarity to Bernard Stiegler’s account of how today’s media industries support empty protentions, the predictive mechanism at issue in Recorded Future opens to a future that is not simply a function of expectations rooted in past experiences. For Stiegler, there can be no viable future because industrially manufactured memories have taken the place of “lived” secondary memories, and thus provide a false or empty source for projections of future possibility. Whereas Stiegler’s model operates in relation to a static source of fixed possibilities, a situation reinforced by his discretization of memory and the past as tertiary—that is, recorded and inert—contents of experience, Recorded Future operates in terms of probabilities that are generated not simply through a processing of the repository of past, inert data of experience, but—crucially—through the power of present data to lay claim on the future. In this sense, it invests in the future as open to possibility, even if it seeks to control how the future will be produced.
42. Hansen, Feed-Forward.
43. I develop such a pharmacological account of the “surplus of sensibility” in Feed-Forward.
44. What ensures that potentiality implicates the future in the present is the solidarity that Whitehead attributes to the extensive continuum: “The extensive continuum is ‘real,’” he writes, “because it expresses a fact derived from the actual world and concerning the contemporary actual
world. All actual entities are related according to the determinations of this continuum; and all possible actual entities in the future must exemplify these determinations in their relations with the already actual world. The reality of the future is bound up with the reality of this continuum. It is the reality of what is potential, in its character of a real component of what is actual” (Whitehead, Process and Reality, 66). On this account, what implicates the future in the present is nothing less than the entirety of causal nexuses operative at any moment in the ongoing process of the universe, or more concretely, in any given settled state of the superjectal world: this is the wellspring of “real potentiality.”
45. Whitehead, Process and Reality, 72, emphasis added.
46. Grusin, Premediation, 59, 60.
47. Ibid., 60, emphasis added.
48. Ibid., 60–61.
Crisis, Crisis, Crisis; or, The Temporality of Networks
Wendy Hui Kyong Chun
How are codes and safety related? How can we understand the current proliferation of codes designed to guarantee our safety and of crises that endanger it? Codes, historically linked to rules and laws, seek to exempt us from hurt or injury by establishing norms, which order the present and render calculable the future. As Adrian Mackenzie and Theo Vurdubakis note, “Code systems and codes of conduct pervade many registers of ‘safe living.’ . . . Many situations today become manageable or tractable by virtue of their codeability.”1 Although codes encompass more than software—they are also “cultural, moral, ethical”—computational codes are increasingly privileged as the means to guarantee “safe living” because they seem to enforce automatically what they prescribe. If “voluntary” actions once grounded certain norms, technically enforced settings and algorithms now do so, from software keys designed to prevent unauthorized copying to iPhone updates that disable unlocked phones, from GPS tracking devices for children to proxies used in China to restrict search engine results. Tellingly, trusted computer systems are systems secure from user interventions and understanding. Moreover, software codes not only save the future by restricting user action, they also do so by drawing on saved data and analysis. They are, after all, programmed. They thus seek to free us from danger by reducing the future to the past, or, more precisely, to a past anticipation of the future. Remarkably, though, computer systems have been linked to user empowerment and agency, as much as they have been condemned as new forms of control. Still more
remarkably, software codes have not only reduced crises, they have also proliferated them. From financial crises linked to complex software programs to supercomputer-dependent diagnoses and predictions of global climate change, from undetected computer viruses to bombings at securitized airports, we are increasingly called on both to trust coded systems and to prepare for events that elude them. This chapter responds to this apparent paradox by arguing that crises are not accidental to a culture focused on safety—they are its raison d’être. In such a society, each crisis is the motor and the end of control systems; each initially singular emergency is carefully saved, analyzed, and codified. More profoundly and less obviously, crises and codes are complementary because they are both central to the emergence of what appears to be the antithesis of both automation and codes: user agency. Codes and crises together produce (the illusion of) mythical and mystical sovereign subjects who weld together norm with reality, word with action. Exceptional crises justify states of exception that undo the traditional democratic separation of executive and legislative branches.2 Correspondingly, as I’ve argued in my recent book, Programmed Visions: Software and Memory, software emerged as a thing—as an iterable textual program—through a process of commercialization and commodification that has made code logos: code as source, code as conflated with, and substituting for, action.3 This chapter revisits code as logos in order to outline the fundamental role crises play in new media networks. Starting from an analysis of rhetorical and theoretical constructions of the Internet as critical, it contends that crisis is new media’s critical difference: its norm and its exception. Crises cut through the continuous stream of information, differentiating the temporally valuable from the mundane, offering users a taste of real-time responsibility and empowerment. They also threaten to undermine this experience, however, by catching and exhausting us in an endlessly repeating series of responses. Therefore, to battle this twinning of crisis and codes, we need a means to exhaust exhaustion, to recover the undead potential of our decisions and our information through a practice of constant care.
Internet Critical
The Internet, in many ways, has been theorized, sold, and sometimes experienced as a “critical” machine. In the mid- to late 1990s, when the Internet first emerged as a mass personalized medium through its privatization, both its detractors and supporters promoted it as a “turning point, an important or decisive state” in civilization, democracy, capitalism, and globalization.4 Bill Gates called the Internet a medium for “friction-free capitalism.”5 John Perry Barlow infamously declared cyberspace an ideal space outside physical coercion, writing, “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” We in cyberspace, he continues, are “creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”6 Blatantly disregarding then-current Internet demographics, corporations similarly touted the Internet as the great racial and global equalizer: MCI advertised the Internet as a race-free utopia; Cisco Systems similarly ran television advertisements featuring people from around the world, allegedly already online, who accosted the viewers with “Are you ready? We are.” The phrase “We are” made clear the threat behind these seeming celebrations: Get online because these people already are.7 The Internet was also framed as quite literally enabling the critical—understood as enlightened, rational debate—to emerge. Al Gore argued that the Global Information Infrastructure finally realized the Athenian public sphere; the U.S. Supreme Court explained that the Internet proved the validity of the U.S. judicial concept of a marketplace of ideas.8 The Internet, that is, finally instantiated the Enlightenment and its critical dream by allowing us—as Immanuel Kant prescribed—to break free from tutelage and to express our ideas as writers before the scholarly world. Suddenly we could all
be Martin Luthers or town criers, speaking the truth to power and proclaiming how not to be governed like that.9 It also remarkably instantiated critiques of this Enlightenment dream: many theorists portrayed it as Roland Barthes’s, Jacques Derrida’s, and Michel Foucault’s theories come true.10 The Internet was critical because it fulfilled various theoretical dreams. This rhetoric of the Internet as critical, which helped transform the Internet from a mainly academic and military communications network to a global medium, is still with us today, even though the daily experience of using the Internet has not lived up to the early hype. From so-called Twitter revolutions—a name that erases the specificity of local political issues in favor of an Internet application—to WikiLeaks’ steady flow of information to Facebook’s alleged role in the 2011 protests in Tunisia and Egypt, Internet technologies are still viewed as inherently linked to freedom. As the controversy over WikiLeaks makes clear, this criticality is also framed as a crisis, as calling the critical—and our safety/security—into crisis. This crisis is not new or belated: the first attempt by the U.S. government to regulate the content of the Internet coincided with its deregulation. The same U.S. government promoting the information superhighway also condemned it as threatening the sanctity and safety of the home by putting a porn shop in our children’s bedroom.11 Similarly, Godwin’s law that “as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1” was formulated in the 1990s.12 So, at the very same time as the Internet (as Usenet) was trumpeted as the ideal marketplace of ideas, it was also portrayed as degenerating public debate to a string of nasty accusations. Further, the same corporations celebrating the Internet as the great racial equalizer also funded roundtables on the digital divide.13 More recently, the Internet has been linked to cyberbullying and has been formulated as the exact opposite of Barlow’s dream: a nationalist machine that spreads rumors and lies. Joshua Kurlantzick, an adjunct fellow at the Pacific Council on International Policy in the United States, told the Korea Times in response to the 2008 South Korean beef protests, “The Internet has fostered the spread of nationalism because it allows people to pick up historical trends, and talk about them, with little verification.”14
Likewise, critics have postulated the Internet as the end of critical theory, not because it literalizes critical theory, but rather because it makes criticism impossible. As theorists McKenzie Wark and Geert Lovink have argued insightfully, the sheer speed of telecommunications undermines the time needed for scholarly contemplation.15 Scholarship, Wark argues, “assumes a certain kind of time within which the scholarly enterprise can unfold,” a time denied by global media events that happen and disappear at the speed of light.16 Theory’s temporality is traditionally belated. Theory stems from the Greek theoria, a term that described a group of officials whose formal witnessing of an event ensured its official recognition. To follow Wark’s and Lovink’s logic, theory is impossible because we have no time to register events, and we lack a credible authority to legitimate the past as past. In response, Lovink has argued for a “running theory” and Wark has argued that theory itself must travel along the same vectors as the media event. I am, as I’ve stated elsewhere, sympathetic to these calls.17 However, I also think we need to theorize this narrative of theory in crisis, which resonates both with the general proliferation of crises discussed earlier and with much recent hand-wringing over the alleged death of theory. Moreover, we need to theorize this narrative in relation to its corollary: an ever increasing desire for crises, or more properly for updates that demand response and yet to which it is impossible to respond completely, from ever-updating Twitter feeds to exploding inboxes. (That is, if, as Ursula Frohne theorized in response to the spread of webcams, “to be is to be seen,” it would now seem that “to be is to be updated.” Automatically recognized changes of status have moved from surveillance to news and evidence of one’s ongoing existence.18) The lack of time to respond—brought about by the inhumanly clocked time of our computers, which render the new old and, as I contend later, the old new—coupled with the demand for response, I suggest, makes the Internet compelling. Crises structure new media temporality.
Crisis: New Media’s Critical Difference
Crisis is new media’s critical difference. In new media, crisis has found its medium, and in crisis, new media has found its value—its
punctuating device. Crises have been central to making the Internet a mass medium to end mass media: a personalized mass device. The aforementioned crises answered the early questions: Why go online? And how can the Internet—an asynchronous medium of communication—provide compelling events for users? Further, crises are central to experiences of new media agency, to information as power: crises—moments that demand real-time response—make new media valuable and empowering by tying certain information to a decision, personal or political (in this sense, new media also personalizes crises). Crises mark the difference between “using” and other modes of media spectatorship or viewing, in particular “watching” television, which has been theorized in terms of liveness and catastrophe. Comprehending the difference between new media crises and televisual catastrophes is central to understanding the promise and threat of new media. Television has most frequently been theorized in terms of liveness: a continuous flowing connection. As Jane Feuer has argued influentially, regardless of the fact that much television programming is taped, television is promoted as essentially live, as offering a direct connection to an unfolding reality “out there.”19 As Mary Ann Doane has further developed in her canonical “Information, Crisis, Catastrophe,” this feeling of direct connection is greatly enhanced in moments of catastrophe: during them, we stop simply watching the steady flow of information on the television set and sit, transfixed, before it. Distinguishing between television’s three different modes of apprehending the event—information (the steady stream of regular news), crisis (a condensation of time that demands a decision: for this reason it is usually intertwined with political events), and catastrophe (immediate “subjectless” events about death and the failure of technology)—Doane argues that commercial television privileges catastrophe because catastrophe “corroborates television’s access to the momentary, the discontinuous, the real.”20 Catastrophe, that is, underscores television’s greatest technological power: “its ability to be there—both on the scene and in your living room. . . . The death associated with catastrophe ensures that television is felt as an immediate collision with the real in all its intractability—bodies in crisis, technology gone awry.” Rather than a series of decisions (or significations), televisual catastrophe
presents us with a series of events that promise reference: a possibility of touching the real. However, as in Feuer’s critique of liveness, Doane points out that television’s relation to catastrophe is ideological rather than essential. Televisual catastrophe is central to commercial television programming because it makes television programming and the necessary selling of viewers’ time seem accidental, rather than central, to televisual time. “Catastrophe,” she writes, “produces the illusion that the spectator is in direct contact with the anchorperson, who interrupts regular programming to demonstrate that it can indeed be done when the referent is at stake.” Thus television renders economic crises, which threaten to reveal the capitalist structure central to commercial television’s survival, into catastrophes: apolitical events that simply happen. Televisual catastrophe is thus “characterized by everything which it is said not to be—it is expected, predictable, its presence crucial to television’s operation. . . . Catastrophe functions as both the exception and the norm of a television practice which continually holds out to its spectator the lure of a referentiality perpetually deferred.”21 In contrast, new media is a crisis machine: the difference between the empowered user and the couch potato, the difference between crisis and catastrophe. From the endless text messages that have replaced the simple act of making a dinner date to the familiar genre of “email forwarding accidents,” crises promise to move us from the banal to the crucial by offering the experience of something like responsibility, something like the consequences and joys of “being in touch.” Crisis promises to take us out of normal time, not by referencing the real, but rather by indexing real time, by touching a time that touches a real, different time: a time of real decision, a time of our lives. It touches duration; it compresses time. It points to a time that seems to prove that our machines are interruptible, that programs always run short of the programmability they threaten. Further, crises, like televisual catastrophes, punctuate the continuous stream of information, so that some information, however briefly, becomes (in)valuable. This value is not necessarily inherent to the material itself—this information could at other moments be incidental and is generally far less important than the contents of the New York Times. Their value stems from
their relevance to an ongoing decision, to a sense of computers as facilitating “real-time” action. Real time has been central to the makeover of computers from work devices to media machines that cut across work and leisure. Real-time operating systems transform the computer from a preprogrammed machine run by human operators in batch mode to “alive” personal machines that respond to users’ commands. Real-time content, stock quotes, breaking news, and streaming video similarly transform personal computers into personal media machines. What is real is what unfolds in real time.22 If before visual indexicality guaranteed authenticity (a photograph was real because it indexed something out there), now real time does so, for real time points elsewhere—to “real-world” events, to the user’s captured actions. That is, real time introduces indexicality to this non-indexical medium, an indexicality felt most acutely in moments of crisis, which enable connection and demand response. Crises amplify what Tara McPherson has called “volitional mobility”: dynamic changes to web pages in real time, seemingly at the behest of the user’s desires or inputs, that create a sense of liveness on demand. Volitional mobility, like televisual liveness, produces a continuity, a fluid path over discontinuity.23 It is a simulated mobility that expands to fill all time, but, at the same time, promises that we are not wasting time, that indeed, through real time, we touch real time. The decisions we make, however, seem to prolong crises rather than end them, trapping us in a never-advancing present. Consider, for instance, “viral” email warnings about viruses. Years after computer security programs had effectively inoculated systems against a 2005 Trojan attached to a message claiming that Osama bin Laden had been captured, messages about the virus—many of which exaggerated its power—still circulated.24 These messages spread more effectively than the viruses they warn of: out of good will, we disseminate these warnings to our address book, and then forward warnings about these warnings, and so on, and so on. (Early on, trolls took advantage of this temporality, with their initial volleys unleashing a firestorm of warnings against feeding the troll.) These messages, in other words, act as “retroviruses.” Retroviruses, such as HIV, are composed of RNA strands that use
the cell’s copying mechanisms to insert DNA versions of themselves into a cell’s genome. Similarly, these fleeting messages survive by our copying and saving them, by our active incorporation of them into our ever-repeating archive. Through our efforts to foster safety, we spread retrovirally and defeat our computers’ usual antiviral systems. This voluntary yet never-ending spread of information seemingly belies the myth of the Internet as a “small world.” As computer scientists D. Liben-Nowell and J. Kleinberg have shown in their analysis of the spread of chain letters, that spread resembles a long thin tree rather than a short fat one.25 This diagram seems counterintuitive: if everyone on the Internet were really within six degrees of each other, information on the Internet should spread quickly and then die. Liben-Nowell and Kleinberg pinpoint asynchrony and replying preferences as the cause: because everyone does not forward the same message at once or to the same number of people, messages circulate at different paces and never seem to reach an end. This temporality—this long thin chain of transmission—seems to describe more than just the spread of chain letters. Consider, for instance, the ways in which a simple search can lead to semi-volitional wandering: hours of tangential surfing. Microsoft has playfully called this temporality “search engine overload syndrome” in its initial advertisements for its “decision engine,” Bing. In these commercials, characters respond to a simple comment such as “We really need to find a new place to go for breakfast,” with a long stream of unproductive associations, such as details about the movie The Breakfast Club (1985). These characters are unable to respond to a question—to make a decision—because each word unleashes a long thin chain of references due to the inscription of information into “memory.” This repetition of stored information reveals that the value of information no longer coincides with its initial “discovery.” If once Walter Benjamin, comparing the time of the story and news, could declare, “The value of information does not survive the moment in which it was new. It lives only at that moment; it has to surrender to it completely and explain itself to it without losing any time,” now, newness alone does not determine value.26 Currently, news organizations charge for old information. The New York Times,
for example, charges online for its archive rather than its current news; similarly, popular radio shows such as This American Life offer only this week’s podcast for free. We pay for information we miss (if we do), either because we want to see it again or because we missed it the first time, our missing registered by the many references to it. (Consider, in this light, all the YouTube videos referencing Two Girls, One Cup after that video was removed.) Repetition produces value, and memory, which once promised to save us from time, makes us out of time by making us respond continually to information we have already responded to, to things that will not disappear. As the Bing commercials reveal, the sheer amount of saved information seems to defer the future it once promised. Memory, which was initially posited as a way to save us by catching what we lose in real time—by making the ephemeral endure and by fulfilling that impossible promise of continuous history to catch everything into the present—threatens our sanity, that is, only if we expect engines and information to make our decisions for us, only if we expect our programs to (dis)solve our crises. Bing’s solution—the exhausting of decisions altogether through a “decision engine” (which resonates with calls for states of emergency to exhaust crises)—after all, is hardly empowering. Bing’s promised automation, however, does perhaps inadvertently reveal that, if real-time new media do enable user agency, they do so in ways that mimic, rather than belie, automation and machines. Machinic real time and crises are both decision-making processes. According to the Oxford English Dictionary (OED), real time is “the actual time during which a process or event occurs, especially one analyzed by a computer, in contrast to time subsequent to it when computer processing may be done, a recording replayed, or the like.” Crucially, hard and soft real-time systems are subject to a “real-time constraint.” That is, they need to respond, in a forced duration, to actions predefined as events. The measure of real time, in computer systems, is its reaction to the live, its liveness—its quick acknowledgment of and response to our action. They are “feedback machines” based on control mechanisms that automate decision making. As the definition of real time makes clear, real time refers to the time of computer processing, not to the user’s time. Real time is never real time—it is deferred and mediated. The emphasis
on crisis in terms of user agency can thus be seen as a screen for the ever-increasing automation of our decisions. While users struggle to respond to “What’s on your mind?” their machines quietly disseminate their activity. What we experience is arguably not a real decision, but rather one already decided in an unforeseen manner: increasingly, that is, our decisions are like actions in a video game. They are immediately felt, affective, and based on our actions, and yet at the same time programmed. Furthermore, crises arguably do not interrupt programming, for crises—exceptions that demand a suspension, or at the very least an interruption of rules or the creation of new norms—are intriguingly linked to technical codes or programs.
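The “real-time constraint” invoked above is concrete enough to sketch in code. The following minimal example is illustrative only: the deadline value, event names, and handler are assumptions rather than any particular system’s API. It shows the basic shape of a soft real-time loop, in which each predefined event must be acknowledged within a forced duration measured in machine time, and a late response is flagged rather than simply absorbed.

```python
# Minimal sketch of a soft real-time loop, with illustrative values:
# each predefined event must be acknowledged within DEADLINE_S seconds
# of machine time; a late acknowledgment is a deadline miss, not a non-event.

import time

DEADLINE_S = 0.05  # the "forced duration" within which a response is due

def acknowledge(event: str) -> None:
    """Stand-in handler: whatever response the system owes the event."""
    print(f"acknowledged: {event}")

def run(events) -> None:
    for event in events:
        start = time.monotonic()       # machine time, not the user's time
        acknowledge(event)
        elapsed = time.monotonic() - start
        if elapsed > DEADLINE_S:
            # A hard real-time system would treat this as outright failure;
            # a soft one degrades: the "live" response arrived too late.
            print(f"deadline missed for {event!r} ({elapsed:.3f}s)")

run(["keypress", "status-update", "stock-quote"])
```

Note that the measure here is the system’s reaction time, clocked by the machine itself: the loop registers only actions predefined as events, and its “liveness” is nothing but the speed of its acknowledgment.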
Logos as State of Exception
Importantly, crises—and the decisions they demand—do not simply lead to the experience of responsibility; as the phrase “panic button” nicely highlights, they also induce moments of fear and terror from which we want to be saved via corporate, governmental, or technological intermediaries. States of exception are now common reactions to events that call for extraordinary responses, to moments of undecidability. As Derrida has argued, the undecidable calls for a response that “though foreign and heterogeneous to the order of the calculable and the rule, must . . . nonetheless . . . deliver itself over to the impossible decision while taking account of law and rules.”27 States of emergency respond to the undecidable by closing the gap between rules and decision through the construction of a sovereign subject who knits together force and law (or, more properly, force and suspended law); this sovereign subject through his actions makes the spirit of the law live. Although these states would seem to be the opposite of codes and programs, I link them together—and to the experience of crises discussed earlier—through questions of agency or, more properly, as I explain later, authority. Giorgio Agamben has most influentially theorized states of exception. He notes that one of the essential characteristics of the state of exception is “the provisional abolition of the distinction among legislative, executive, and judicial powers.”28 This provisional
granting of “full powers” to the executive suspends a norm such as the constitution in order to better apply it. The state of exception is the opening of a space in which application and norm reveal their separation and a pure force-of-law realizes (that is, applies by ceasing to apply . . .) a norm whose application has been suspended. In this way, the impossible task of welding norm and reality together, and thereby constituting the normal sphere, is carried out in the form of the exception, that is to say, by presupposing their nexus. This means that in order to apply a norm it is ultimately necessary to suspend its application, to produce an exception. In every case, the state of exception marks a threshold at which logic and praxis blur with each other and a pure violence without logos claims to realize an enunciation without any real reference.29 The state of exception thus reveals that norm and reality are usually separate—it responds to the moment of their greatest separation. In order to bring them together, force without law/logos—a living sovereign—authorizes a norm “without any reference to reality.”30 It is a moment of pure violence without logos. That is, if the relationship between law and justice—a judicial decision—usually refers to an actual case (it is an instance of parole, an actual speaking), a state of exception is langue in its pure state: language in the abstract and at its most mystical. Given this, states of exception would seem the opposite of programming. Programs do not suspend anything, but rather ensure the banal running of something “in memory.” Programs reduce the living world to dead writing; they condense everything to “source code” written in advance, hence the adjective source. This privileging of code is evident in everything from commonsense to theoretical understandings of programming, from claims made by free software advocates that free source code is freedom to those made by new media theorists that new media studies is, or should be, software studies. Programmers, computer scientists, and critical theorists have all reduced software—once evocatively described by historian Michael Mahoney as “elusively intangible, the behavior of the
machines when running” and described by theorist Adrian Mackenzie as a “neighbourhood of relations”—to a recipe, a set of instructions, substituting space/text for time/process.31 Consider, for instance, the commonsense computer science definition of software as a “set of instructions that direct a computer to do a specific task” and the OED definition of software as “the programs and procedures required to enable a computer to perform a specific task, as opposed to the physical components of the system.” Software, according to these definitions, drives computation. These definitions, which treat programs and procedures interchangeably, erase the difference between human readable code, its machine readable interpretation, and its execution. The implication is thus: execution does not matter—as in conceptual art, it is a perfunctory affair; what really matters is the source code. Relatedly, several new media theorists have theorized code as essentially and rigorously “executable.” Alexander Galloway, for instance, has argued powerfully that “code draws a line between what is material and what is active, in essence saying that writing (hardware) cannot do anything, but must be transformed into code (software) to be effective. . . . Code is a language, but a very special kind of language. Code is the only language that is executable. . . . Code is the first language that actually does what it says.”32 This view of software as “actually doing what it says” assumes no difference between source code and execution, instruction and result. Here the says is not accidental—although perhaps surprising coming from a theorist who argues in an article titled “Language Wants to Be Overlooked” that “to see code as subjectively performative or enunciative is to anthropomorphize it, to project it onto the rubric of psychology, rather than to understand it through its own logic of ‘calculation’ or ‘command.’ ”33 The statement “Code is the first language that does what it says” reveals that code has surprisingly—because of machinic, dead repetition—become logos. Like the king’s speech in Plato’s Phaedrus, it does not pronounce knowledge or demonstrate it—it transparently pronounces itself.34 The hidden signified—meaning, the father’s intentions—shines through and transforms itself into action. Like Faust’s translation of logos with “deed”—The spirit speaks! I see how it must read / And boldly write:
“In the beginning was the Deed!”—software is word become action: a replacement of process with inscription that makes writing a live power by conflating force and law. Not surprisingly, this notion of source code as source coincides with the introduction of alphanumeric languages. With them, human-written, nonexecutable code becomes source code, and the compiled code becomes the object code. Source code thus is arguably symptomatic of human language’s tendency to attribute a sovereign source to an action, a subject to a verb. By converting action into language, source code emerges. Thus Galloway’s statement, “To see code as subjectively performative or enunciative is to anthropomorphize it, to project it onto the rubric of psychology, rather than to understand it through its own logic of ‘calculation’ or ‘command,’ ” overlooks the fact that to use higher-level alphanumeric languages is already to anthropomorphize the machine and to reduce all machinic actions to the commands that supposedly drive them. In other words, the fact that “code is law”—something Lawrence Lessig pronounces with great aplomb—is hardly profound.35 Code, after all, is “a systematic collection or digest of the laws of a country, or of those relating to a particular subject.”36 What is surprising is the fact that software is code, that code is—has been made to be—executable, and that this executability makes code not law, but rather every lawyer’s dream of what law should be: automatically enabling and disabling certain actions and functioning at the level of everyday practice. Code as law is code as police. Insightfully, Derrida argues that modern technologies push the “sphere of the police to absolute ubiquity.” The police weld together norm with reality; they “are present or represented everywhere there is force of law. . . . They are present, sometimes invisible but always effective, wherever there is preservation of the social order.”37 Code as law as police, like the state of exception, makes executive, legislative, and juridical powers coincide. Code as law as police erases the gap between force and writing, langue and parole, in a complementary fashion to the state of exception. It makes language abstract, erases the importance of enunciation, not by denying law, but rather by making logos everything. Code is executable because it embodies the power of the executive. More
generally, the dream of executive power as source lies at the heart of Austinian-inspired understandings of performative utterances as simply doing what they say. As Judith Butler has argued in Excitable Speech, this theorization posits the speaker as “the judge or some other representative of the law.”38 It resuscitates fantasies of sovereign—again executive—structures of power. It embodies “a wish to return to a simpler and more reassuring map of power, one in which the assumption of sovereignty remains secure.”39 Not accidentally, programming in a higher-level language has been compared to entering a magical world—a world of logos, in which one’s code faithfully represents one’s intentions, albeit through its blind repetition rather than its “living” status.40 As MIT professor Joseph Weizenbaum, creator of ELIZA and member of the famed MIT AI lab, has argued: The computer programmer . . . is a creator of universes for which he alone is the lawgiver. So, of course, is the designer of any game. But universes of virtually unlimited complexity can be created in the form of computer programs. Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behavior. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.41 Weizenbaum’s description underscores the mystical power at the base of programming: a power both to found and to enforce. Automatic compliance welds together script and force, again, code as law as police or as the end of democracy. As Derrida has underscored, the police is the name for the degeneration of democratic power. . . . Why? In absolute monarchy, legislative and executive powers are united. In it violence is therefore normal, conforming to its essence, its idea, its spirit. In democracy, on the contrary, violence is no longer accorded nor granted to the spirit of the police.
Because of the presumed separation of powers, it is exercised illegitimately, especially when instead of enforcing the law, it makes the law.42 Code as logos and states of exception both signify a decay of the decay that is democracy. Tellingly, this machinic execution of law is linked to the emergence of a sovereign user. Celebrations of an all-powerful user/agent—“you” as the network, “you” as producer—counteract concerns over code as law as police by positing “you” as the sovereign subject, “you” as the decider. An agent, however, is one who does the actual labor, hence agent as one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. Programmers and users are not creators of languages, nor the actual executors, but rather living sources who take credit for the action. Similarly, states of exception rely on auctoritas. The auctor is one who, like a father who “naturally” embodies authority, authorizes a state of emergency.43 An auctor is “the person who augments, increases or perfects the act—or the legal situation—of someone else.”44 The subject that arises, then, is the opposite of the democratic agent, whose power stems from potestas. Hence, the state of exception, Agamben argues, revives the auctoritas as father, as living law: The state of exception . . . is founded on the essential fiction according to which anomie (in the form of auctoritas, living law, or the force of law) is still related to the juridical order and the power to suspend the norm as an immediate hold on life. As long as the two elements remain correlated yet conceptually, temporally, and subjectively distinct (as in republican Rome’s contrast between the Senate and the people, or in medieval Europe’s contrast between spiritual and temporal powers) their dialectic—though founded on a fiction—can nevertheless function in some way. But when they tend to coincide in a single person, when the state of exception, in which they are bound and blurred together,
becomes the rule, then the juridico-political system transforms itself into a killing machine.45 The reference here to killing machines is not accidental. States of exception make possible a living authority based on an unliving (or, as my spell-check keeps insisting, an unloving) execution. This insistence on life also makes it clear why all those discussions of code anthropomorphize it, using words such as says or wants. It is, after all, as a living power that code can authorize. It is the father behind logos that shines through the code. To summarize, we are witnessing an odd dovetailing of the force of law without law with writing as logos, which perverts the perversion that writing was supposed to be (writing as the bastard “mere repetition” was defined in contrast to and as inherently endangering logos). They are both language at its most abstract and mystical, albeit for seemingly diametrically opposed reasons: one is allegedly language without writing; the other writing without language. This convergence, which is really a complementary pairing, because they come to the same point from different ends, puts in place an originary sovereign subject. This originary sovereign subject, however, as much as he may seem to authorize and begin the state of exception, is created belatedly by it. Derrida calls sovereign violence the naming of oneself as sovereign—the sovereign “names itself. Sovereign is the violent power of this originary appellation,” an appellation that is also an iteration.46 Judith Butler similarly argues that through iterability the performative utterance creates the person who speaks it. Further, the effect of this utterance does not originate with the speaker, but rather with the community she or he joins through speaking.47 The programmer/user is produced through the act of programming. Code as logos depends on many circumstances, which also undermine the authority of those who would write.
Sources, After the Fact
Source code as source—as logos—is a highly constructed and rather dubious notion, not in the least because, as Friedrich Kittler
has most infamously argued, “there is no software,” for everything, in the end, reduces to voltage differences.48 Similarly (and earlier), physicist Rolf Landauer has argued, “There is really no software, in the strict sense of disembodied information, but only inactive and relatively static hardware. Thus, the handling of information is inevitably tied to the physical universe, its contents and its laws.”49 This construction of source code as logos depends on many historical and theoretical, as well as physical, erasures. Source code, after all, cannot be run unless it is compiled or interpreted, which is why early programmers called source code “pseudo-code.”50 Execution, that is, a whole series of executions, belatedly makes some piece of code a source. Source code only becomes a source after the fact. Source code is more accurately a re-source rather than a source. Source code becomes the source of an action only after it expands to include software libraries, after it merges with code burned into silicon chips, and after all these signals are carefully monitored, timed, and rectified. It becomes a source after it is rendered into an executable: source code becomes a source only through its destruction, through its simultaneous nonpresence and presence.51 Even executable code is no simple source: it may be executable, but even when run, not all lines are executed, because commands are read in as necessary. The difference between executable and source code brings out the ways in which code does not simply do what it says—or more precisely, does so in a technical (crafty) manner.52 Even Weizenbaum, as he posits the programmer as all-powerful, also describes him as ignorant because code as law as police is a fiction. The execution of a program more properly resembles a judicial process: A large program is, to use an analogy of which Minsky is also fond, an intricately connected network of courts of law, that is, of subroutines, to which evidence is transmitted by other subroutines. These courts weigh (evaluate) the data given to them and then transmit their judgments to still other courts. The verdicts rendered by these courts may, indeed, often do, involve decisions about what court has “jurisdiction” over the intermediate results then being
manipulated. The programmer thus cannot even know the path of decision-making within his own program, let alone what intermediate or final results it will produce. Program formulation is thus rather more like the creation of a bureaucracy than like the construction of a machine of the kind Lord Kelvin may have understood.53 This complex structure belies the conceit of source code as conflating word and action. The translation from source code to executable is arguably as involved as the execution of any command. Compilation carries with it the possibility of deviousness: our belief that compilers simply expand higher-level commands—rather than alter or insert other behaviors—is simply that, a belief, one of the many that sustain computing as such. It is also a belief challenged by the presence and actions of viruses, which—as Jussi Parikka has argued—challenge the presumed relationship between invisible code and visible actions and the sovereignty of the user.54 Source code as source is also tied to the history of structured programming, which sought to rein in “go-to-crazy” programmers and self-modifying code. A response to the much-discussed “software crisis” of the late 1960s, its goal was to move programming from a craft to a standardized industrial practice by creating disciplined programmers who dealt with abstractions rather than numerical processes. This dealing with abstractions also meant increasingly separating the programmer from the machine. As Kittler has infamously argued, we no longer even write.55 With “data-driven programming”—in which solutions are generated rather than produced in advance—it seems we no longer even program. Code as logos would seem language at its most abstract because, like the state of exception, it is language in a pure state. It is language without parole, or, to be more precise, language that hides—that makes unknowable—parole. To be clear, I am not valorizing hardware over software, as though hardware naturally escapes this drive to make space signify time. Hardware too is carefully disciplined and timed in order to operate “logically”—as logos. As Philip Agre has emphasized, the digital abstraction erases the fact that gates have “directionality in
both space (listening to its inputs, driving its outputs) and in time (always moving toward a logically consistent relation between these inputs and outputs).”56 This movement in time and space was highlighted nicely in early forms of “regenerative” memory, such as the Williams tube. The Williams tube used televisual CRT technology not for display, but for memory: when a beam of electrons hits the phosphor surface, it produces a charge that persists for 0.2 seconds before it leaks away. Therefore, if the charge is regenerated at least five times per second, it can be detected by a parallel collector plate. Key here—and in current forms of volatile memory involved in execution—is erasability. Less immediately needed data do not need to be regenerated, and John von Neumann intriguingly included within the rubric of “memory” almost all forms of data, referring to stored data and all forms of input and output as “dead” memory. Hence now in computer speak, one reverses common language and stores something in memory. This odd reversal and the conflation of memory and storage gloss over the impermanence and volatility of computer memory. Without this volatility, however, there would be no memory.57 This repetition of signals both within and outside the machine makes clear the necessity of responsibility—of continuous decisions—to something like safety (or saving), which is always precarious. It thus belies the overarching belief and desire in the digital as simply there—anything that is not regenerated will become unreadable—by also emphasizing the importance of human agency, a human act to continually save that is in concert with technology. Saving is something that technology alone cannot do—the battle to save is a crisis in the strongest sense of the word. This necessary repetition makes us realize that this desire for safety as simple securing, as ensured by code, actually puts us at risk of losing what is valuable, from data stored on old floppy drives to CDs storing our digital images because, at a fundamental level, the digital is an event rather than a thing.58 It also forces us to engage with the fact that if something stays in place, it is not because things are unchanging and unchangeable, but rather because they are continually implemented and enforced. From regenerative mercury delay-line tubes to the content of digital media, what remains is not what is static, but rather that which is continually repeated.
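The arithmetic of that regeneration can be made concrete in a small sketch. What follows is a minimal simulation, assuming a simplified linear leakage of charge; the only figures taken from the passage above are the 0.2-second persistence and the five-per-second regeneration rate, while the tick resolution and the function name are illustrative assumptions, not a description of any real Williams tube.

# Minimal sketch of regenerative (volatile) memory, loosely modeled on the
# Williams tube described above. Assumed: a written charge leaks away
# linearly over 0.2 seconds; regeneration reads the fading charge and
# rewrites it in full.

TICKS_PER_SECOND = 100
PERSISTENCE_TICKS = 20  # a written charge leaks away completely in 0.2 s

def bit_survives(refresh_hz, seconds=2):
    """Simulate one stored bit; return True if it is still readable at the end."""
    charge = PERSISTENCE_TICKS  # full charge, measured in ticks of life left
    interval = TICKS_PER_SECOND // refresh_hz if refresh_hz else 0
    for tick in range(1, seconds * TICKS_PER_SECOND + 1):
        if interval and tick % interval == 0 and charge > 0:
            charge = PERSISTENCE_TICKS  # regeneration: read the fading charge, rewrite it
        charge -= 1  # constant leakage
        if charge <= 0:
            return False  # nothing left for the collector plate to detect
    return True

print(bit_survives(refresh_hz=5))  # True: five regenerations per second keep the bit alive
print(bit_survives(refresh_hz=4))  # False: one refresh too few, and the charge leaks away
print(bit_survives(refresh_hz=0))  # False: what is not regenerated becomes unreadable

Run as written, only the bit refreshed five times per second endures; the others fade within a fifth of a second—the volatility on which, the passage above argues, computer memory depends.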
This movement does not mean that there are no things that can be later identified as sources, but rather that constant motion and care recall things in memory. Further, acknowledging this necessary repetition moves us away from wanting an end (because what ends will end) and toward actively engaging and taking responsibility for everything we want to endure. It underscores the importance of access, another reason for the valorization of digitization as a means of preservation. To access is to preserve. By way of conclusion, I suggest that this notion of constant care can exhaust the kind of exhaustion encapsulated in “search overload syndrome.” The experience of the undecidable—with both its reliance on and difference from rules—highlights the fact that any responsibility worthy of its name depends on a decision that must be made precisely when we know not what to do. As Thomas Keenan eloquently explains, “The only responsibility worthy of the name comes with the removal of grounds, the withdrawal of the rules or the knowledge on which we might rely to make our decisions for us. No grounds means no alibis, no elsewhere to which to refer the instance of our decision.”59 Derrida similarly argues, “A decision that would not go through the test and ordeal of the undecidable would not be a free decision; it would only be the programmable application or the continuous unfolding of a calculable process.”60 The undecidable is thus freedom in the more rigorous sense of the word—a freedom that comes not from safety but rather from risk. It is a moment of pause that interrupts our retroviral dissemination and induces the madness that, as Kierkegaard has argued, accompanies any moment of decision. The madness of a decision, though, differs from the madness described by Microsoft, which stems from the constant deferral of a decision. This deferral of decision stemming from a belief in information as decision catches us in a deluge of minor-seeming decisions that defer our engagement with crisis—or rather renders everything and thus nothing a crisis. To exhaust exhaustion, we need to exhaust too the desire for an end, for a moment in which things can just stand still. To exhaust exhaustion we must also deal with—and emphasize—the precariousness of programs and their predictions. That is, if they are to help us save the future—to help us fight the exhaustion of planetary reserves, and so on—they can do so only if we use the
gap between their future predictions and the future not to dismiss them, but rather to frame their predictions as calls for responsibility. That is, “trusting” a program does not mean letting it decide the future or even framing its future predictions as simply true, but instead acknowledging the impossibility of knowing its truth in advance while nonetheless responding to it. This is perhaps made most clear through the example of global climate models, which attempt to convince people that something they can’t yet experience, something simulated, is true. (This difficulty is amplified by the fact that we experience weather, not climate—like capital, climate, which is itself the product of modern computation, is hard to grasp.) Trusted models of global mean temperature by organizations such as the Geophysical Fluid Dynamics Laboratory (GFDL) “chart” changes in mean temperature from 1970 to 2100.61 Although the older temperatures are based on historical data, and thus verifiable, the future temperatures are not. This suturing of the difference between past and future is not, however, the oddest thing about these models and their relation to the future, although it is certainly the basis from which they are most often attacked. The weirdest and most important thing about their temporality is their hopefully effective deferral of the future: these predictive models are produced so that if they are persuasive and thus convince us to cut back on our carbon emissions, then what they predict will not come about. Their predictions will not be true or verifiable. This relationship is necessary because by the time we know whether their predictions are true or not, it will be too late. (This is perhaps why the George W. Bush administration supported global climate change research: by investigating the problem, building better models, they bought more time for polluters.) I stress this temporality not because I’m a climate change denier—the fact that carbon dioxide raises temperature has been known for more than a century—but because, by engaging this temporality in terms of responsibility, we can best respond to critics who focus on the fallibility of algorithms and data, as if the gap between the future and future predictions were reason for dismissal rather than hope. This mode of deferring a future for another future is an engagement with the undead of information. The undead of information haunts the past and the future; it is itself a haunting. As Derrida
explains, “The undecidable remains caught, lodged, as a ghost . . . in every decision, in every event of decision. Its ghostliness . . . deconstructs from within all assurance of presence, all certainty or all alleged criteriology assuring us of the justice of a decision, in truth of the very event of a decision.”62 This undeadness means that a decision is never decisive, that it can always be revisited and reworked. Repetition is not simply exhaustion, not simply repetition of the same that uses up its object or subject. What can emerge positively from the linking of crisis to networks—what must emerge from it if we are not to exhaust ourselves and our resources—are continual ethical encounters between self and other. These moments can call forth a new future, a way to exhaust exhaustion, even as they complicate the deconstructive promise of responsibility.
Notes

1. Adrian Mackenzie and Theo Vurdubakis, conference organizers, overview to Codes and Conduct Workshop, Institute for Advanced Studies, University of Lancaster, November 19–20, 2007, www.lancs.ac.uk/ias/annualprogramme/protection/workshop2/index.htm.
2. See Giorgio Agamben, State of Exception, trans. Kevin Attell (Chicago: University of Chicago Press, 2005).
3. As Barbara Johnson notes in her explanation of Jacques Derrida’s critique of logocentrism, logos is the “image of perfectly self-present meaning . . . , the underlying ideal of Western culture. Derrida has termed this belief in the self-presentation of meaning, ‘Logocentrism,’ for the Greek word Logos (meaning speech, logic, reason, the Word of God).” Barbara Johnson, “Translator’s Introduction,” in Dissemination, trans. Barbara Johnson (Chicago: University of Chicago Press, 1981), ix.
4. Oxford English Dictionary, 2nd ed., s.v., “crisis.”
5. Bill Gates, The Road Ahead (New York: Viking, 1995).
6. John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation (EFF), February 9, 1996, http://w2.eff.org/.
7. See Wendy Hui Kyong Chun, Control and Freedom: Power and Paranoia in the Age of Fiber Optics (Cambridge, Mass.: MIT Press, 2006).
8. See Al Gore, “Forging the New Athenian Age of Democracy,” Intermedia 22, no. 2 (1994): 14–16; and U.S. Supreme Court Decision, Reno v. ACLU No. 96-511, June 26, 1997, http://caselaw.lp.findlaw.com/.
9. For more on enlightenment as a stance of how not to be governed
like that, see Michel Foucault, “What Is Critique?” in What Is Enlightenment?, ed. James Schmidt (Berkeley: University of California Press, 1996), 382–98.
10. For examples, see George P. Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology (Baltimore, Md.: Johns Hopkins University Press, 1992); Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (New York: Simon & Schuster, 1995), and Women and Performance: A Journal of Feminist Theory 17 (2007).
11. Senator Daniel R. Coats argued during congressional debate over the Communications Decency Act that “perfunctory onscreen warnings which inform minors they are on their honor not to look at this [are] like taking a porn shop and putting it in the bedroom of your children and then saying ‘Do not look.’ ” Department of Justice Brief Filed with the Supreme Court 21 (1997) No. 96-511, http://groups.csail.mit.edu/mac/classes/6.805/articles/cda/reno-v-aclu-appeal.html.
12. “Godwin’s Law,” http://en.wikipedia.org/wiki/Godwin’s_law.
13. For more on this, see Chun, Control and Freedom.
14. Kang Hyun-Kyong, “Cell Phones Create Youth Nationalism,” Korea Times Online, May 12, 2008, http://koreatimes.co.kr/.
15. See McKenzie Wark, “The Weird Global Media Event and the Tactical Intellectual,” in New Media, Old Media: A History and Theory Reader, ed. Wendy Hui Kyong Chun and Thomas Keenan (New York: Routledge, 2005), 265–76; and Geert Lovink, “Enemy of Nostalgia, Victim of the Present, Critic of the Future: Interview with Peter Lunenfeld,” Nettime Mailing List Archives, July 31, 2000, www.nettime.org/Lists-Archives/. Lovink contends, “Because of the speed of events, there is a real danger that an online phenomenon will already have disappeared before a critical discourse reflecting on it has had the time to mature and establish itself as institutionally recognized knowledge.” Geert Lovink, My First Recession (Rotterdam: V2/NAI, 2003), 12.
16. Wark, “Weird Global Media Event,” 265.
17. Wendy Hui Kyong Chun, “The Enduring Ephemeral, or the Future Is a Memory,” Critical Inquiry 35 (Autumn 2008): 148–71.
18. Ursula Frohne, “Screen Tests: Media, Narcissism, Theatricality, and the Internalized Observer,” in CTRL [SPACE]: Rhetorics of Surveillance from Bentham to Big Brother, ed. Thomas Y. Levin et al. (Cambridge, Mass.: MIT, 2002), 252.
19. Jane Feuer, “The Concept of Live Television: Ontology as Ideology,” in Regarding Television: Critical Approaches, ed. E. Ann Kaplan (Frederick, Md.: University Publications of America, 1983), 12–22.
20. Mary Ann Doane, “Information, Crisis, Catastrophe,” in Logics of
Television: Essays in Cultural Criticism, ed. Patricia Mellencamp (Bloomington: Indiana University Press, 1990), 222.
21. Ibid., 238.
22. Thomas Y. Levin, “Rhetoric of the Temporal Index: Surveillant Narration and the Cinema of ‘Real Time,’ ” in CTRL [SPACE], 578–93.
23. See Tara McPherson, “Reload: Liveness, Mobility and the Web,” in The Visual Culture Reader, 2nd ed., ed. Nicholas Mirzoeff (New York: Routledge, 2002), 458–70; and Alexander Galloway, ch. 2, Protocol: How Control Exists after Decentralization (Cambridge, Mass.: MIT Press, 2004).
24. See “Osama Bin Laden Virus Emails,” Hoax-Slayer, last updated May 10, 2011, www.hoax-slayer.com/.
25. See D. Liben-Nowell and J. Kleinberg, “Tracing Information Flow on a Global Scale Using Internet Chain-Letter Data,” Proceedings of the National Academy of Sciences 105, no. 12 (March 25, 2008): 4633–38.
26. Walter Benjamin, “The Story Teller,” in Illuminations (New York: Harcourt Brace Jovanovich, 1969), 90.
27. Jacques Derrida, “Force of Law: The Mystical Foundation of Authority,” in Acts of Religion, ed. Gil Anidjar (New York: Routledge, 2002), 252.
28. Agamben, State of Exception, 7.
29. Ibid., 40.
30. Ibid., 36. According to Agamben, “The state of exception is an anomic space in which what is at stake is a force of law without law (which should therefore be written: force-of-law). Such a ‘force-of-law,’ in which potentiality and act are radically separated, is certainly something like a mystical element, or rather a fictio by means of which law seeks to annex anomie itself” (Ibid., 39).
31. See Michael Mahoney, “The History of Computing in the History of Technology,” Annals of the History of Computing 10 (1988): 121; and Adrian Mackenzie, Cutting Code: Software and Sociality (New York: Peter Lang, 2006), 169.
32. Alexander Galloway, Protocol: How Control Exists after Decentralization (Cambridge, Mass.: MIT Press, 2004), 165–66. Emphasis in original. Given that the adjective executable applies to anything that “can be executed, performed, or carried out” (the first example of “executable” given by the OED is from 1796), this is a strange statement.
33. Alexander Galloway, “Language Wants to Be Overlooked: Software and Ideology,” Journal of Visual Culture 5, no. 3 (2005): 321.
34. See Jacques Derrida’s analysis of The Phaedrus: Jacques Derrida, “Plato’s Pharmacy,” in Dissemination, trans. Barbara Johnson (Chicago: University of Chicago Press, 1981), 134.
35. See Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999).
36. Oxford English Dictionary, 2nd ed., s.v., “code.”
37. Derrida, “Force of Law,” 279, 278.
38. Judith Butler, Excitable Speech: A Politics of the Performative (New York: Routledge, 1997), 48.
39. Ibid., 78.
40. Fred Brooks, while responding to the disaster that was OS/360, also emphasized the magical powers of programming. Describing the joys of the craft, Brooks writes, “Why is programming fun? What delights may its practitioner expect as his reward? / First is the sheer joy of making things. . . . / Second is the pleasure of making things that are useful to other people. . . . / Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. . . . / Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. . . . / Finally there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. . . . Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.” Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering (Reading, Mass.: Addison-Wesley Professional, 1995), 7–8.
41. Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman and Company, 1976), 115.
42. Derrida, “Force of Law,” 281.
43. Agamben, State of Exception, 82.
44. Ibid., 76.
45. Ibid., 86.
46. Derrida, “Force of Law,” 293.
47. Butler, Excitable Speech, 39.
48. Friedrich Kittler, “There Is No Software,” ctheory.net, October 18, 1995, www.ctheory.net/.
49. Rolf Landauer, “Computation: A Fundamental Physical View,” Physica Scripta 35, no. 1 (1987): 35.
50. For instance, The A-2 Compiler System Operations Manual, prepared
by Richard K. Ridgway and Margaret H. Harper under the direction of Grace M. Hopper (Philadelphia: Remington Rand, 1953), explains that a pseudo-code drives its compiler, just as “C-10 Code tells UNIVAC how to proceed. This pseudo-code is a new language which is much easier to learn and much shorter and quicker to write. Logical errors are more easily found in information than in UNIVAC coding because of the smaller volume” (1).
51. Jacques Derrida stresses the disappearance of the origin that writing represents: “To repeat: the disappearance of the good-father-capital-sun is thus the precondition of discourse, taken this time as a moment and not as a principle of generalized writing. . . . The disappearance of truth as presence, the withdrawal of the present origin of presence, is the condition of all (manifestation of) truth. Nontruth is the truth. Nonpresence is presence. Differance, the disappearance of any originary presence, is at once the condition of possibility and the condition of impossibility of truth. At once.” Derrida, “Plato’s Pharmacy,” 168.
52. Compilation creates a logical—a crafty—relation rather than a numerical one—one that cannot be compared to the difference between decimal or binary numbers, or numerically equivalent equations, for it involves instruction explosion and the translation of symbolic into real addresses. For example, consider the instructions needed for adding two numbers in PowerPC assembly language:
li r3,10          *load register 3 with the number 10
li r4,20          *load register 4 with the number 20
add r5,r4,r3      *add r3 to r4 and store the result in r5
stw r5,sum(rtoc)  *store the contents of r5 (i.e. 30)
                  *into the memory location called “sum”
blr               *end of this piece of code
53. Joseph Weizenbaum, Computer Power, 234.
54. Jussi Parikka, Digital Contagions: A Media Archaeology of Computer Viruses (New York: Peter Lang, 2007).
55. Kittler, “There Is No Software.”
56. Philip E. Agre, Computation and Human Experience (Cambridge: Cambridge University Press, 1997), 92. When a value suddenly changes, there is a brief period in which a gate will give a false value. In addition, because signals propagate in time over space, they produce a magnetic field that can corrupt other signals nearby (“crosstalk”). This schematic erases all these various time- and distance-based effects by rendering space blank, empty, and banal.
57. Memory is not static, but rather an active process. A memory must be held in order to keep it from moving or fading. Memory does not equal storage. Although one can conceivably store a memory, storage usually
refers to something material or substantial, as well as to its physical location: a store is both what and where it is stored. According to the OED, to store is “to furnish, to build stock.” Storage or stocks always look toward the future. Memory stems from the same Sanskrit root for “martyr.” Memory calls for an act of commemoration or renewal of what is stored. Memory is not a source, but an act, and by focusing on either memory or real time as sources, we miss the importance of this and other actions, such as the transformation of information into knowledge, of code into vision. Since the coded “source” of digital media can only operate by being constantly refreshed, degenerated, and regenerated, the critical difficulty of digital media thus stems less from its speed or source than from the ways in which it runs.
58. Wolfgang Ernst thus argues that new media is a time-based medium. See Wolfgang Ernst, “Dis/continuities: Does the Archive Become Metaphorical in Multi-Media Space?,” in New Media, Old Media: A History and Theory Reader, ed. Wendy Hui Kyong Chun and Thomas Keenan (New York: Routledge, 2006), 105–23.
59. Thomas Keenan, Fables of Responsibility: Aberrations and Predicaments in Ethics and Politics (Stanford, Calif.: Stanford University Press, 1997), 1.
60. Derrida, “Force of Law,” 252.
61. Geophysical Fluid Dynamics Laboratory (GFDL), “NOAA GFDL CM2.1 Model,” www.gfdl.noaa.gov/video/gfdlglobe_tref_d4h2x1_1970_2100_30f_720x480.mov.
62. Derrida, “Force of Law,” 253.
7

They Are Here
Timothy Morton
In this chapter, I analyze Toni Basil’s video for the Talking Heads’ song “Crosseyed and Painless” (1980). I strive to show that the way in which the video stages the proximity of poor African Americans to the broken tools of modernity, far from valorizing their immiseration, offers a way to think black environmental consciousness as symptomatic of and central to the emerging ecological age, the age of global warming. I do this by thinking about nonhumans. Thinking antiracism has often proceeded by thinking within lines that preestablish thin and rigid boundaries between the human and nonhuman realms (subject and object, freedom and unfreedom, and so on). The task has quite rightly been to de-objectify, to de-commodify. Instead, this chapter proceeds by changing the sense of what is meant by the term object. I contend that this procedure provides a firmer logical structure for thinking race and antiracism. This is precisely because the lineage that brought us slavery and racism is also the lineage that brought us the anthropocentric boundary between human and nonhuman.1 This boundary is predicated in part on another rigid distinction between person and thing. Any attempt to make the human-nonhuman boundary a little less thin, a little less rigid, risks being seen as hostile to the lineage of antiracism. And any talk of objects risks raising the specter of objectification. Yet without this risk, a subtle anthropocentrism is reproduced, an anthropocentrism that is now under strain from science, from philosophy, and from the reality of a globally warming world.2 Is it possible to think antiracism without anthropocentrism?
This chapter is an incomplete, necessarily loose attempt at such a project. Before she became famous with “Hey Mickey,” before she was asked by an enthusiastic David Byrne to direct the memorable and often-played video for Talking Heads’ “Once in a Lifetime,” Toni Basil directed the video for their song “Crosseyed and Painless.”3 This chapter shows how this video and the song are part of the anthem of global anxiety, the overwhelming sensation that underlies ecological thinking like a note that no one wants to hear, a certain high-frequency hum like the sound of a malfunctioning electric pylon. This is the sound of the end of the world, but also of the beginning of history. In so doing, the chapter considers the stone that the ecophilosophy builders cast away, the broken tool—the one golden key to the fact of coexistence with nonhumans. This consideration opens onto a deepened examination of the human attunement of anxiety, our home base, or rather, our contemporary “unhome,” uncanny and strange. The chapter arrives there by using the insights of the emerging philosophy called object-oriented ontology (OOO). The video narrates the story of a young man who is seduced into selling drugs by a street hustler, moving aside from the normal routine of work and play. He is threatened by a gangster. He finds a girlfriend and is disturbed by her sexual potency. Eventually they split up, and he is left to wander the streets again. Toni Basil employed the Electric Boogaloos, a stunning group of body poppers from Fresno and Long Beach. David Byrne paid between $10,000 and $15,000 out of pocket for the video. The Electric Boogaloos comprised Timothy “Popin’ Pete” Solomon, Robot Dane, Skeeter Rabbit, McTwig (Twig Imai, the gentleman with the cigar), “Scarecrow Scally” Allen, and Ana Marie “Lollipop” Sanchez. They specialized in early hip-hop dance styles such as body popping and locking—people becoming robots becoming people. The first-ever moonwalk on video is by Skeeter Rabbit, not Michael Jackson at the MTV awards, as is often thought. Jackson learned it from Twig Imai and Skeeter. The dance has to do with working with objects, working with tools: bouncing a basketball, tuning to the heft of the ball that asks to be bounced just that way (Figure 7.1). The larger frame is of
Figure 7.1. Lost My Shape—Trying to Act Casual. In this opening scene from the Talking Heads’ music video, Crosseyed and Painless (dir. Toni Basil), we witness a dance of the objects: Skeeter Rabbit (center) finds himself surrounded by a host of humans and nonhumans. Reproduced by permission of MSA Agency.
humans tuning to industry, wearing masks, acting like machines. Still wider, the dance has to do with African Americans trying to tune to the racist world: trying to fit, trying to act casual, feeling like an accident, as Byrne’s lyrics put it, moving to the rhythm of banking and high finance. Working in a factory. Making money, trying to love. Flooded with anxiety and fear. Objectified. They are bearing a message. An overwhelming environmental threat hangs over these African Americans, what in Buddhism is called “all-pervasive pain.” All-pervasive pain is not just the pain of being stabbed or of not getting that money you wanted by trying to sell those drugs, but what one teacher calls “an environmental creepiness.”4 Or as the song says, Isn’t it weird. This is the pain of being in what Buddhism calls a “realm of existence”—being situated, phenomenologically,
somewhere or other. Environmental racism as the experience of environmentality as such, of suffering as environmental. Not some blank “natural” environment (always a human product, a product of violence), but one that is charged with a certain irreducible appearance, a certain weirdness. At the back of one’s mind there resides an overhanging sense of dread. This dread is what Emmanuel Levinas calls the il y a, the “there is,” or what this song calls the “there was”—There was a line / There was a formula.5 A story begins: Once upon a time, there was . . . But what? This is what narrative theory calls aperture, the feeling that something is beginning. Aperture is precisely the feeling that we don’t know yet; we can only work by hindsight. It seems to come from everywhere. The dancers are suspended in white space, as if nothing means enough, or anything, any more. Then all of a sudden they are in their world of cars and train tracks. Then back to white. The rugs keep being pulled out. Surrounded, they are, literally, by whiteness, a void that masks a deeper claustrophobia: a room that we discern in the tight shadows around the dancers, the square shape of the video frame. The video puts the characters in a claustrophobic space of whiteness, a waiting room with nothing in it, with no exit, nothing around it. “Crosseyed and Painless” is a superb example of funk, a broken blues without a story, without that four-chord trick, that twelve-bar narrative, just popping in and out, locking into that first section, like a needle stuck in the groove of a broken record. Funk evokes the repetition compulsion, returning again and again to the same part of the city, like Freud in his essay on the uncanny, over and over again to the same strange part of town, the part that is your home, made stranger by the constant popping dislocation of the groove.6 Funk burrows into that initial moment, the beginning of the blues sequence—the basic unhappiness that spawns the ironic enjoyment, the blue note. That chorus-like section that tries to fly from the sickening lurch of the verse, and seems for a few seconds to float above it, before descending back to uncanny home base, like a bird with a broken wing. No escape velocity can be achieved from the horrible gravity of the song, the centripetal torque emitted by the sharpened, shortened blues on heavy rotation.
Because this chapter makes frequent reference to David Byrne’s lyrics for “Crosseyed and Painless,” in a funk-like repetitive refrain, let us witness them here:

CROSSEYED AND PAINLESS

Lost my shape—Trying to act casual
Can’t stop—I might end up in the hospital
I’m changing my shape—I feel like an accident
They’re back!—To explain their experience
Isn’t it weird / Looks too obscure to me
Wasting away / And that was their policy
I’m ready to leave—I push the facts in front of me
Facts lost—Facts are never what they seem to be
Nothing there!—No information left of any kind
Lifting my head—Looking for danger signs
There was a line / There was a formula
Sharp as a knife / Facts cut a hole in us
There was a line / There was a formula
Sharp as a knife / Facts cut a hole in us
I’m still waiting . . . I’m still waiting . . . I’m still waiting . . . I’m still waiting . . .
I’m still waiting . . . I’m still waiting . . . I’m still waiting . . . I’m still waiting
The feeling returns / Whenever we close our eyes
Lifting my head / Looking around inside
The Island of Doubt—It’s like the taste of medicine
Working by hindsight—Got the message from the oxygen
Making a list—Find the cost of opportunity
Doing it right—Facts are useless in emergencies
The feeling returns / Whenever we close our eyes
Lifting my head / Looking around inside
Facts are simple and facts are straight
Facts are lazy and facts are late
Facts all come with points of view
Facts don’t do what I want them to
Facts just twist the truth around
Facts are living turned inside out
Facts are getting the best of them
Facts are nothing on the face of things
Facts don’t stain the furniture
Facts go out and slam the door
Facts are written all over your face
Facts continue to change their shape
Working by hindsight, getting the message from the oxygen of a poisoned warming Earth, I argue that what seemed in 1980 like postmodern free play—Facts all come with points of view—turns out to be the disturbing truth of Lacan, that there is no metalanguage, realized in the age of ecology.8 Postmodernism was just the flashing neon sign on the tip of the iceberg. Irony becomes the food of phenomenological sincerity, the viscosity with which nonhumans stick to us, live in our DNA, are our DNA. Because, as that other great ’80s phenomenologist Buckaroo Banzai puts it, “Wherever you go, there you are.”9 Blues for a blue planet with no exit. The lyrics are clownish, ironic, spiked with a faint message that becomes louder, like radio interference you keep hearing at the back of the station you are tuned to. Lost my shape—Trying to act casual. That’s funny and ironic, but then: Can’t stop—I might end up in the hospital. I push the facts in front of me. Sharp as a knife. There was a formula. The caesuras in each line allow for a twist of anxiety, uncertainty. They cause a kind of epistemological gap that might be hiding an ontological gap—is that a void or is it obscuring something? The Island of Doubt—It’s like the taste of medicine: the unexpected twist in the second half of the line is as disturbing as it is funny. Moreover, the song seems to be talking about itself: talking about its use of caesura and ironic gaps—the knife, the
device that makes the cut: Facts cut a hole in us. Analogous things happen in the video. Something keeps on coming, seeping through the gaps torn by the hood’s knife. The world is torn open but there is no beyond, just another being, waiting with a knife, a sexy girl, a compelling memory, the promise of wealth, a lonely street, a suit, the smell of money, a dust mask, a briefcase, a rear windshield, an umbrella, a newspaper, a car. The racist is the one who fills the gap between how a human appears and what a human is with some kind of metaphysical paste. It is strictly impossible, and thus ontological and political violence is what racism mimes in order to bomb thinking back to an age before there were uncertain facts (data) rather than absolute facts (metaphysics). There is much to be said about that strange car at the close of Basil’s video, so casually sitting in the driveway, covered with a sheet (Figure 7.2). The young man (played by Skeeter) does not notice it as he shuffles up the street. Does not notice as it glows through a range of electrical colors: iridescent blue, green, yellow, orange, red, magenta. The car is a device, a quotidian machine, a tool, waiting for its user. Yet it is also an entity in its own right, not waiting for anything. This occluded, hidden aspect of the car seems to become visible in the way it glows, a horrible rainbow that no one sees, a rainbow not as a sign of promise but of a threat, an existential threat: Sharp as a knife. There was a formula. Something is (already) here; they are here. The withdrawn thing-power of the car, ontologically beneath what object-oriented philosopher Graham Harman would call its “tool-being,” is unveiled for a second by the metaphorical fusion reaction of the video. Its hiddenness is on display; it hides in plain sight.10 The viewer wants to shout at Skeeter: “Look! For heaven’s sake just look at the car!” But one is on the hither side of the screen, and he is on the yonder side. Yet, for a second, it is as if this dramatic aesthetic fourth wall dissolves, as we glimpse tools whose operation was withdrawn throughout the video—the chroma-keying (color separation overlay is the BBC term), the use of a broken video camera and an unusual color control, Brian Eno’s fingers turning the knob so the car that is a not-car becomes the demonic rainbow
Figure 7.2. I’m Still Waiting. In this still from the Talking Heads’ music video, Crosseyed and Painless (dir. Toni Basil), the car is glowing green, and will soon shift to blue tones. It is as if the car is alive, or sentient—it certainly seems to be half-communicating, just as humans blush. But we are left disturbingly unsure. Reproduced by permission of MSA Agency.
in a sickening nowness of indeterminate dimension. The story is not over. The car glows knowingly, to no one, the fourth wall shattered, the video over. We have participated in a Dionysian ritual of coexistence, which is just what every tragedy really is. It’s over for us. But not for that car, or that kid, or that street, or that song, that fades rather than ending. Aperture without closure. Objects, objectification, and the status of tools are very much the issue of the video. Consider the fact that black people from the standpoint of racism, from the standpoint of environmental racism, are tools with souls (Aristotle’s definition of a slave, organon empsychon), components of industry living next to the garbage, working under the overpass, the littered railway tracks, looking at
normal life through the windshield of someone else’s car. Working-class black people are thrust up against the chatter of the foreign languages of nonhumans. They are in a disturbingly racialized position to experience “the mystery and melancholy of a street” (as Giorgio de Chirico put it), the melancholic uncanny of a world made of cloth. From this vantage point, they experience the difficulty of doing anything, paralyzed by the inertia of things, unable to cut through it with the knife of big business, stuck in the garbage, trying to push the facts in front of one. Haunted by illusion, lies, anxiety, the black working class knows the secret life of things, the way they are in excess of their social role. Yet inner space does not provide a refuge from the outer world. There is no escape from this implicitly racist environmentality: The feeling returns / Whenever we close our eyes. Race, environment, nonhuman things are intertwined. Again, this is by no means to glamorize racist poverty. It is instead to reveal a certain structure of feeling in which all humans are implicated—complicities with nonhuman beings, illuminating, disturbing, phantasmagorical. And it is to argue that the black working-class American experience is central to that structure of feeling. What is happening? What is happening to the world? To ask Heidegger’s founding question, perhaps at the right moment—How on Earth does it stand with Being?11 What we are seeing, in that car shot, is the camera as it talks to the sky. This consists, more precisely, of an early 1980s conversation between broken tools, a burned-out tube in a Panasonic color video camera and a color control knob, a knob not present on contemporary video cameras at all. To accomplish this, we are also witness to the results of a car-shaped matte and a chroma-key setup via a vision mixer, probably the Grass Valley or CMX nonlinear video editor. Then the whole scene was filmed again to superimpose the shifting car-shaped color. Chroma-keying exploits how an entity need not be present: it can be zuhanden, in Heidegger’s terms, namely ready-to-hand, rather than vorhanden, present-at-hand, which is what happens when a tool ceases to function in a normal way—it suddenly springs to our attention. The car is replaced by a blue car-shaped matte that becomes transparent when fed through a vision mixer set up to key out that precise luminance of blue. In another shot, Skeeter wears a
blue face mask that is similarly keyed out to allow the overlaying of the smoking factory. The blue screen (nowadays, because of digital sensors, the green screen) disappears. It is a beautiful example of a Heideggerian tool, because it is “before your very eyes,” right in front of the camera. Yet it becomes transparent, allowing for different backgrounds to be placed behind the subject, in this case, the Electric Boogaloos and the car. Transparent for whom? Here is the magical phenomenon that the video evokes. The tool is transparent not for a human but for the camera itself, the camera that is composing the video image out of Skeeter Rabbit popping and locking down the street, and the glowing colors that have been chroma-keyed into the outline of the car cover. With its blue color channel keyed out via a luminance key in the vision mixer that matches the brightness of the particular chosen shade of blue, one tool, the camera, makes another tool, the screen or matte, invisible. No wonder the video is so uncanny, such a perfect complement to the song. The equipment itself creates a complex dance between visibility and invisibility, presence and absence, transparency and opacity, surroundings as tools that glint into presence, like fish in a dark ocean, then vanish.
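The logic of that disappearance can be spelled out in a brief sketch. The following is an illustration only, assuming simple RGB pixel arrays: it keys on distance from a single color rather than on the luminance of a 1980 broadcast signal, and the key color, tolerance, toy frames, and function names are my assumptions, not the settings of Basil’s vision mixer.

# A minimal sketch of keying: wherever a pixel is close enough to the chosen
# shade of blue, the foreground becomes transparent and the background shows
# through. All parameters here are illustrative assumptions.

import numpy as np

def chroma_key(foreground, background, key_rgb=(0, 0, 255), tolerance=60):
    """Composite two (H, W, 3) uint8 images by keying out pixels near key_rgb."""
    fg = foreground.astype(np.int32)
    # Distance of every pixel from the key color; a small distance marks the matte
    distance = np.linalg.norm(fg - np.array(key_rgb), axis=-1)
    matte = distance < tolerance  # True wherever the blue matte sits
    out = foreground.copy()
    out[matte] = background[matte]  # the matte becomes a window onto the background
    return out

# Toy frames: a blue car-shaped block over gray, keyed against a glowing green field
fg = np.full((120, 160, 3), 128, dtype=np.uint8)
fg[40:80, 50:110] = (0, 0, 255)  # the blue "car cover"
bg = np.zeros((120, 160, 3), dtype=np.uint8)
bg[..., 1] = 200  # a wash of iridescent green
composite = chroma_key(fg, bg)

The matte is present to the camera yet invisible in the composite: one tool makes another disappear, which is the tool-like transparency described above.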
Figure 7.3. Panasonic PK-600 video camera. Note the color controls, which gave Toni Basil and Brian Eno the ability to modulate color live. Photograph courtesy of Timothy Morton.
Sharp as a knife: a cut, a caesura, a break. A nonhuman has intruded from outside the narrative frame. This intrusion also possibly consists of another work of art altogether, a thin slice of something like Eno’s 1981 video Mistaken Memories of Mediaeval Manhattan.12 For it may well be Eno’s broken camera that causes this astonishing smooth blend from blue to pink (Figure 7.3). Or at any rate, what we see in the video is Byrne’s and Basil’s homage to Eno’s broken tool. (Eno was the producer of Talking Heads’ Remain in Light, the album that contains “Crosseyed and Painless.”) As with his ambient music experiments, which emerged from a broken record player that wouldn’t increase its volume, Mistaken Memories plays with a broken camera left on its side in floods of Manhattan light. Eno’s video is a view of Manhattan sideways on, a broken, nonhuman Manhattan inhabited by light and camera tubes, phosphor screens, and building surfaces. This broken tool generated an anamorphic view of the same space, which lost its shape: ana, the Greek prefix, means “un-,” while morphic means “pertaining to shape.” The thing is un-shaping, contorting away from habitual human grasping and use. Harman has argued that the broken tool reveals something extraordinary about things—that they are never exhausted by their relation or by their use or by their proximity to other things, including humans. Broken tools force us to comprehend a nonhuman world in a physical, even nonconceptual way. The molten car and the color-shifting dancers make a mockery of racist metaphysics, where appearance is inseparably glued to what a being is. What we see in the iridescent car and in the chroma-keyed outlines of the color-shifting humans is a conversation between photons and the electron stream from the gun in the video camera tube, undoubtedly a Vidicon tube (Figure 7.4). The photoconducting surface at the front of the tube, probably made of selenium, is activated by photons incident upon it. This surface is then scanned by an electron gun. What happens when we view the picture is a conversation
Figure 7.4. Diagram of the Vidicon camera tube. A nonhuman drama of quantum events plays out in the tube. Photograph courtesy of Encyclopaedia Britannica/UIG via Getty Images. Reproduced by permission.
between the classical physical level on which humans mechanically open access to the photoresistant surface behind the camera lens, and the quantum level that determines the picture. This quantum level involves the phosphor screen to which the camera is hooked up, and the television screen that displays the final video. It is at this quantum level that we glimpse something decisively nonhuman. Moreover, since the classical level may be thought of simply as an averaged-out, smoothed version of the quantum weirdness we shall explore soon, that level too is saturated, quite literally, with the nonhuman. Indeed, many contemporary younger quantum theorists are beginning to wonder aloud whether the classical level is drastically less real than the quantum, because larger and larger things (fullerenes [carbon molecules in a geodesic dome], a tiny tuning fork that is visible to the naked eye, diamonds) can be made to exhibit quantum behavior.13
Why is a brief survey of the quantum-scale events incident in the shot significant for this essay? Because it shows the depth, complexity, and manifold nonhuman levels of the assemblage light–car–Skeeter Rabbit–street–lens–photoconducting screen–electron spray–electron gun–coaxial cable–electron gun–electron spray–phosphor screen–David Byrne–Brian Eno–Toni Basil–viewer. Weird distances and ambiguities are involved, strange accidents, and a history of nonhuman things that is wiped like a VHS tape when we only think, “It’s a video of ‘Crosseyed and Painless’ with some special effects.” This history is not just for us, but rather for a shoal of sparkling sensual beings, the phenomenological fish that Edmund Husserl discovered, but which OOO applies to any interaction between any entities whatsoever. Husserl discovered that the vast ocean of reason that Immanuel Kant had opened up, the third dimension of thought that is the condition of possibility for thinking and understanding, was teeming with all kinds of phenomena: hoping, wishing, asserting, love, hate, judging.14 These intentional objects are like fish swimming in the ocean, consisting of both a mind and its immanent object, locked together as if with a coaxial cable. In the same way we could see the chroma-keyed car cover and Skeeter with his factory-displaying mask on, and Ana Marie Sanchez lying sideways below Skeeter as he moonwalks across the museum steps, as phenomenological fish in the sensual interspace between video camera, vision mixer, and monitor. These entities are like fish in a kluge-like assemblage mind, not so different from the kluge that is the human brain, made of all kinds of primate, reptile, fish, and sponge components. These quasi-thoughts are what the song calls “facts.” Somehow the video restores to the term fact the notion of making and crafting (Latin, facere). It does so by narrating the birth and death of shoals of fact fish. In the cathode ray tube, the electron stream is made of quanta (literally “packets”), which is to say units of a certain kind that excite electrons in the phosphor, another kind of unit. To do this, they must deflect, smack into, penetrate. At this level, seeing and measuring are haptic. To perceive at this level means to hit with another quantum. Thus at this level there is no difference between
seeing a bemused philosopher seeing a billiard ball and that billiard ball smacking another one. This insight helps us to understand why OOO objects are quite similar to one another, insofar as sentience is not that different from what we think of as physical, nonsentient interaction. In addition, at the quantum scale, it simply is not possible to “see” before the smack. This gives rise to complementarity, the deep reason for Heisenberg’s uncertainty principle and the Schrödinger equations.15 To see is to change. The quantum is withdrawn “before” (but does it even make sense to speak of linear time here?) it is measured: working by hindsight. The camera tube is already damaged, before it is considered damaged by a human. A photon hits the photoresistant screen. A jump in energy causes an electron and its hole partner to be released into the conduction band. Many times this happens, and the pattern is now detectable by the stream from the electron gun. Watching the subsequent video image, we are also watching a record of accidents. In a cathode ray tube, the crystal lattices of the phosphor screen contain little impurities. It is nothing other than the precipitate of abandoned object cathexes, as Freud says of the human ego—in this case, the trace chemicals or dopants that treated the lattice to make it nonhomogeneous and thus to prolong the emission time (afterglow).16 The camera tube sees by being damaged. Each registration of the electron from the gun is a little death, and the simultaneous birth of something new—a fresh electron liberated from its place in the crystal lattice. The liberation creates a little hole, which gives rise to an electronic level in what is poetically called the “forbidden gap.” The electron and the hole—known collectively as an exciton—floats about until it finds equilibrium again by slotting into an impurity center. It gives up energy as luminescence, de-exciting swiftly by scintillation, and de-exciting slowly by what is known, also poetically, as the “forbidden mechanism.” The mechanism is forbidden because it’s improbable that such scintillation could happen, except under excited conditions—and rarely even then. But like the Forbidden City, forbidden colors still exist, demonstrating how entities behave in ways that subvert the kinds of relations that are expected of them. The fragile, “impossible,” hallucinatory video image is produced in part by fleeting and improbable forbidden
mechanisms. A fragile quantum magic could easily be fried away to classical nothing by too many electrons incident on the crystal lattice. This is not a digital situation: it is analog. The forbidden gap and the forbidden mechanism are not totally out of bounds, just hard to access. This tells us that there are regions in objects such as crystal lattices that are opaque—they are obscure, not totally nonexistent. The gap is a symptom of a deeper entity withdrawn from access. Otherwise absolutely nothing could happen across it, because it is not even nothing—an oukontic nothing. The gap is a meontic nothing—a nothingness, a thingly nothing.17 This idea of a nothing that is the mask of a something, not the total absence of anything at all, becomes very significant as we proceed. We humans see a dot of color. In other words, we see a mistake, the memory of a mistake, the trace of scintillation, a mistaken memory. Memory in general is a kind of mistake, a broken tool, the trace of accidents, anamorphosis, all the shocks that things are heir to.18 Memory in this case just is inscribed in a crystal lattice that lost its shape, deformed into new form. We see the colorful display of an instability that has been canceled, working by hindsight. We see the past, the afterglow: Working by hindsight. But at the quantum level what has taken place is a little drama of life and death, if for a moment I can indulge in what Jane Bennett has happily called a kind of “strategic anthropomorphism.”19 Or is it? Freud breaks the death drive down to a single-celled organism.20 But why stop there? DNA is evidently an unstable molecule that is trying to get rid of itself, accidentally and ironically reproducing itself as soon as it unzips. And why even stop there? DNA needs ribosomes, which need DNA—so to break the vicious circle there must have been an RNA world of RNA replicants hitched onto nonorganic replicants, perhaps silicate crystals.21 Are we at the level of life here or beyond it? Is life just a small, metaphysically dubious region of a much larger undead region neither living nor dead? And why stop at replicants such as RNA or those silicates? How can molecules replicate at all? Is there not some dance between stability and instability that is deeper still? When too many photons hit the photoconducting surface in the tube—say when Eno had left his newly acquired tool lying on
its side in a window in the full Manhattan sunshine—the tube “burns out.” The death of a thing is its successful translation—success is never absolute, in this object-oriented world, but good enough to alter (that is, to damage) the thing in question. A photon or an electron shocks a crystal lattice into life—life, meaning inconsistency, namely the continual search for death, which in these terms would be consistency. The disequilibrium of a crystal lattice is such that it is flooded with its own excited electrons that flow about until they find their resting place: sparks of color as death. We see the story of a crystal lattice shocked into releasing photons of certain frequencies, defined packets, units, quanta: we see traces of the shock in the gorgeous disturbing color. We see the past. Just as we cannot directly see death, only memories, pieces of paper in a wastebasket, some car keys, so the death happening in the tube cannot be seen directly.22 The electron from the gun dies, liberating the electron from the lattice, a lattice of yttrium oxysulfide (the red channel), zinc sulfide and copper (the green channel), and zinc sulfide with a few parts per million of silver (the blue channel), or rather a gigantic assemblage of lattices, gigantic at least from an electron’s or photon’s point of view. Placing our eyes near the monitor, we humans see the trace of death in the video image. But there is also death at the quantum scale. Appearance is the trace of death, namely, the form of a thing, which just is the past: form is the past. The withdrawn essence of a thing on the other hand cannot be located in measurable ontically given space—not for us humans, not for the electron, not for the photon. This “before” of a photon is really a not-yet: the future that lies in wait inside the camera tube. The “essence” of the tube is the future, its appearance as burned-out electrons is the past. When too much past accumulates, the whole tube is burned out, which is to say that it crosses a certain threshold, but the threshold, like human death, is not thin or rigid. Still the tube records color, colors that are in fact quite lovely. It is truly a mistaken memory, as Eno’s Mistaken Memories makes clear. The memory trace of electrons liberated from a crystal lattice by too many incident photons. The tube is ruined from a human point of view, and from the standpoint of commodity circulation—you cannot sell a broken camera. But the tube is still operational, still agential. From the human
point of view it is a zombie, an undead being that scoffs at the rigid boundary between death and life. The tube is thus an alien being from an alien world—an alien world just behind the video itself, just to the side of Skeeter Rabbit, just to the side of the street. This being is not located in some beyond, but right “here.” I put here in quotation marks because the pastness of what the tube reveals undermines the temporal atomism that underwrites the metaphysics of presence, not by evaporating things into a flux, but by staging the intrusion of an alien being into a seemingly coherent world. It would be even better to say that this is evidence for the always-already, shifting and uneasy presence of an entity, a nonhuman, that car, that video tube, that Manhattan sky. Aliens in our midst. Or as Ian Bogost puts it, we are in their midst.23 Whose midst is it anyway? This is not my beautiful car. Lost my shape. I feel like an accident. Racism is not simply politically violent—it is a ridiculous ontological violence, because appearance is profoundly severed from the being of a thing, yet a thing is not just nothing at all. How something begins is precisely as an accident, as anamorphosis, as distortion.24 An excitation appears in a crystal lattice. A cut is made by a knife, or a photon, producing a gap in the real that is then seen as a tear through which light comes. A blob that used to be a car begins to glow, a car that was always-already a blob of color, a distortion of metal, plastic, oil, and drivers. Oil is an anamorphic distortion of algae and dinosaur bones. We can only see things backward, working by hindsight, reading clues, getting the message from the oxygen. As if we were able to think the conditions of the possibility of that video, working by hindsight—Kant’s discovery of a vast hidden realm of transcendental space and time within, behind the back of, thought. There followed Husserl’s discovery that this realm is not a cold empty realm of purity, but is teeming with phenomenological fish, which this chapter has equated with the phenomena seen not by humans but by the interobjective assemblage of video camera and vision mixer and monitor. Beneath this ocean of flashing fish moves the silent, horror-struck U-boat of Heideggerian Dasein through the cold darkness and opacity of Angst. And yet, in a further twist, it is as if traveling in a bathysphere released from the
U-boat, far beneath that gigantic ocean of a priori synthesis and intentional or sensual objects and (human) Dasein, we are invited by the video to detect an amazing coral reef of actually existing beings, already here, a Panasonic PK-600 video camera, Skeeter Rabbit, a burnt-out tube, the sky, David Byrne, someone’s fingers on the color control, a CMX 600 nonlinear video editor, Twig Imai, video manually sliced by razor blades, SMPTE timecode, Toni Basil. They’re back!—To explain their experience. Video colors are facts about photons translating electrons. Traces of the past, stories are inscribed in phosphor. An illusion-like trail of facts scintillates, a trail that fades over time. Facts are the shadows of things, their echo, thrown into the past from an impossible futurality. Facts are nothing on the face of things: what a perfect OOO insight. Facts, the factical, facticity, are on the side of illusion and shadow play, precisely because there are real entities, real things that they don’t touch. Underneath the Byrne and Eno postmodern frisson of unmeaning, there are entities: facts are precisely nothing on the face of them, distortions by their very nature. And these facts are, as the song says, useless in emergencies. The current ecological emergency confronts us with the nonmetaphysical intimacy of nonhuman beings. An emergency in which we grab for facts—but they might be broken, they might be lying. And furthermore, they are nothing on the face of things. They are a way not to see the face of things. They may be useful, but facts don’t exhaust the emergency that just is the existence and coexistence of things. The line is sung with a half-knowingness, as if the narrator was double-, triple-checking his pocket for keys and wallet, obsessively, compulsively, to stave off anxiety, the sharp end of coexistence. The abyss of wishful thinking, a rope bridge suspended over the abyss of anxiety. A nonhuman panic sets in. Lifting my head—Looking for the danger signs. Facts don’t stain the furniture, yet facts go out and slam the door, and they lie. They twist the truth around. Or are they telling the truth? Between the flattened seventh and the tonic note of the funk sequence, there is nothing, not even nothing—an oukontic nothing, like the forbidden gap between electron levels, which an electron jumps across when excited by a photon in the crystal lattice on a phosphor screen. It is as if we are being shown two sides of the
same thing, over and over, or are those the same side of two different things? Isn’t it weird / Looks too obscure to me. Whose Möbius strip is this anyway? Why are we in this muddle? Because of things, because of nonhumans. Below the depths traversed by the Heideggerian U-boat, there is a gigantic coral reef of beings. Anxiety, Dasein, Being, they all depend on the always-already of these sparkling beings, bouncing up and down like balls or David Byrne. Beneath facticity, that is, beneath the correlationist being-in-the-world that is the primordial facticity, there are things—facticity itself is nothing on the face of things. I’m still waiting. The late eighteenth century was the age during which Western philosophy decided it could only talk about access to reality, not reality as such. It was also the beginning of the Anthropocene, the moment at which human history intersects with geological time, because since about 1780 humans began to deposit a thin layer of carbon in Earth’s crust, which can now be found in deep lakes and Arctic ice. In 1945 came the Great Acceleration, a vast increase in the magnitude of the Anthropocene, marked by the deposition of a thin layer of radioactive materials in Earth’s crust courtesy of The Gadget, Little Boy, and Fat Man. The basic attunement of the Anthropocene is anxiety, which is precisely the feeling of the loss of world—the end of the world, but not as we thought, a great bang or a void, but a prolongation of things in synchrony with the disappearance of meaningful backdrop—and thus the disappearance of the foreground as such. The Island of Doubt—It’s like the taste of medicine. A totally white background, blank, white, a meontic nothing—nothing has enough meaning. The horrible familiarity and strangeness of anxiety, its uncanny creepiness that seems to lurk just off of the edge of our perception like a car in a driveway beside the street we’re walking on, or a car approaching in your wing mirror. U.S. car wing mirrors are object-oriented ontologists: they say, “Objects in mirror are closer than they appear.” The trouble with ecology is that it brings everything too close. Things become vivid, yet unreal, at the very same time and for the very same reasons. Underneath nihilism, not in spite of it but through it, downward, there are things. The postmodern hesitation and luxuriance in the slide of signification is a soothing elixir that blocks access
to these beings that exist underneath nihilism. The postmodern island of doubt is a soothing oasis in the ocean of anxiety. Nihilism itself is a thicker version of this island of doubt, like a layer of seaweed that covers over the iridescent, painfully sharp beings that sparkle beneath the U-boat of Heidegger. I’m still waiting. The feeling of beginning, what is called aperture, the sense of an opening. It is this sense of beginning, which just is the pure givenness that underscores the Kantian aesthetic dimension, which the Talking Heads’ video exemplifies. The car winks at us, slightly provocatively—it is the one that is still waiting, not just human me, not just the narrator or Skeeter Rabbit. In order to have the attunement, the Stimmung of beauty, that perfect experience of one’s unconditional abyss of reason, the frightening fact that we can indeed think beyond the human, beyond the light cone of Hermann Minkowski, who geometrically proved the theory of relativity. Outside the light cone that propagates from events in the universe, objects cannot be specified as happening now, or then, or later, and we cannot be sure about where they are happening either.25 Yet we can think them. We can think thirty million years ago, we can think the Oort cloud at the edge of the solar system, we can think events that cannot be located in space-time anywhere at all, events that actually exist. In order for all this, there is already, always-already, an entity, a being, an object. Not some vague whatever-being, but a specific, unique being, as if the Levinasian il y a were an actual schizophrenic homeless guy sitting at the entrance to the church (the mise-en-scène of Coleridge’s “The Rime of the Ancient Mariner”). The proximity of an alien presence underwrites the nihilistic abyss of reason into which Kant plunges thinking, the third dimension of inner space that opens at the very beginning of the Anthropocene, the infinity within, because I can know infinity, because I know I cannot count to it, which is precisely what infinity is.26 Just as kitsch, which is the aesthetic shit of the other, underwrites beauty, since in order to experience it, beauty must be, Kant argues, universalizable, that is, everyone should find this painting beautiful, but I shall not coerce you into finding it so. Thus someone else’s beautiful thing might just be your shit, ugly and slimy and crappy,
a shiny frosted Christmas ornament, a thing banished to the hinterlands of taste because it is precisely a thing. The unconditional, noncoercive givenness of beauty, which just is a projection of my reason, is predicated on a priori synthetic judgment, the third dimension beyond scholastic being that Kant opens up, that icy region of the mind’s Antarctic analogous to the nihilistic probings of earthly space of the quasi-imperialist ancient mariner himself. Yet this profoundly nonviolent being-with the beautiful thing is predicated on that thing’s existence, which threatens me beneath the abyss of reason with a horrifying uncanny agency. The car winks at me knowingly. The aesthetic sheen, the German Schein of beauty, is undergirded by the glistening uncanny that opens up in the chameleon shifting of the car. It is like watching a shaft of sunlight from another world illuminate that car—another world just behind your head, just next to the apparatus that produces the video, the other world that is precisely Eno’s camera gear, his broken tools sitting just to the left, just offscreen. This alien world is utterly intimate with this one, casting its strange light on it from a few inches away. Is this not more disturbing than the Kantian depths of inner space, more creepy even than the death of distant stars that we can detect if we push the radio telescope of speculative realism through the Kantian correlationist bubble? Because this thing here, this television image, is winking at me, sending out a sonar pulse from underneath nihilism, not in spite of it, but through the abyss of reason, through the darkness, as if the silence and stillness of these infinite spaces that fill one with Pascalian dread exist only because that silence and stillness are the sound of someone breathing down one’s neck two inches away from the back of the head:

Like one, that on a lonesome road
Doth walk in fear and dread,
And having once turned round walks on,
And turns no more his head;
Because he knows, a frightful fiend
Doth close behind him tread.
(Coleridge, “The Rime of the Ancient Mariner,” part VI, lines 447–52)
Coleridge intuited it, at the very beginning of the Anthropocene. Poetry is sometimes a way to do philosophy when one lacks the thoughts. Inside the beauty is the a priori. Inside the a priori is the given. Inside the given is the thing, the object, this object, this actual cut-out piece of chroma-keyed video space, this space that is actually a substance, an oozing of photons and videotape. Ecological awareness, then, far from being the glib world of “We are all earthlings,” is the uncanny nightmare charnel ground realm.27 Where nothing is exactly, ontically, precisely real, and all the more real for that. The demonic quality of art disturbs the philosopher, who rightly wards off the aesthetic dimension as a domain of “evil.” It is evil to the extent that in order to be beautiful art, an object has already hypnotized us. We are always-already caught in the spooky agency; the shine of the headlights of the nonhuman has us under its spell. The very infinitude of reason’s abyss, our capacity not simply to understand but to think, is suspended in the demonic interspace between things, a dreamtime that pulses with eerie light. Why “evil”? Because we cannot tell whether it is lying or not: “What constitutes pretense is that, in the end, you don’t know whether it’s pretense or not.”28 Far from giving us a boring world of billiard balls that clunk in predictable, dull ways, OOO gives us a spooky world plagued with beings who may not be alive, who may not be intelligent—are we one of them? A kind of sort of world of good enough performativity, precisely because objects are real, not because they aren’t. This OOO world is one of necessarily incomplete Turing test data, because each thing is hidden in a cosmological version of Turing’s setup, behind the door of withdrawal, sending us sensual typescript about itself under the doorway. It is a claustrophobic world of illusion, precisely because there are real things. Powerful art thus threatens us from this slightly evil place, and in so doing, it is fully and meaningfully ecological, whether or not its explicit content is ecological. This evil is the mask worn by trickster objects as they pull us down underneath modernity’s nihilism into a “reanimist” universe of lies, traps, demons, and slapstick. We are pulled into this world not in spite of that nihilism, but underneath it, through it, out from under it like Danny in The Shining.
In the end, what is far more horrifying than absolutely nothing at all—the oukontic nothing—is the shifty, ghostly meontic nothing of nothingness, the pretense whose status as pretense one is unable to ascertain. When Skeeter empties his pockets in one of the video’s final scenes, it is as if he is pulling out this nothingness—he wants it to be oukontic: “See, I got nothing, there is not even nothing here!” Yet instead his pockets rebel against his human intention, as Skeeter pulls out his basic anxiety, in the face of the cigar-chomping hustler with his tricksterish hand-jive, as if instead to say, “What do you want from me?” Skeeter’s very pants subvert his wish to enter properly the human social symbolic order (Figure 7.5). As if the video were saying, “You are looking for aliens in the wrong place—they are in your pants pockets, they are under that car cover. They are that cover, hiding in plain sight. Right here, under your nose. Just behind you, a frightful fiend.” In a dizzying perspective shift, it turns out that the abyss that we took to be an abyss of reason, or perhaps of swirling matter, behind things, is actually in front of things—and not only this, the abyss is emitted by things, like radio waves. It is the abyss of causality, otherwise known as the aesthetic dimension. Givenness beyond the ontically given that I take to be real in a metaphysics-of-presence way. What is called Nature just is the reduction of things to their givenness for humans. This reduction must be policed, since it is inherently spurious and unstable. Instead we should look beyond nature, namely, beyond the beyond, to the things right in front of us, hiding in plain sight. They are here. Ecological awareness is always belated, finding oneself always already to have been inside something, inside a gigantic being called biosphere, a being that cannot be seen but can be inferred and computed with prosthetic cognizing devices. The affective equivalent of this gigantic being is the ocean of anxiety in which objects bob up and down in a menacing, Expressionist funk, halfway between clown and murderer. Acting evokes crime and art evokes evil: a magician with a cigar and a stash of money, a hip-hop dancer distorted by machines, a racist environment, a basketball, a car, a knife, exhaust from a factory smokestack. Facts lost, facts are never what they seem to be. I, an ocean of anxiety, dance around
you, another ocean of anxiety. Since they are always displaced from their appearance, strung out in and as temporality just like humans, do knives and basketballs themselves experience anxiety, the one emotion that never lies? They distort, they are distortion, as we have seen in the case of the car—that car, a not-car, that ocean of color. They are here. The car is still, waiting.

Figure 7.5. In this still from the Talking Heads’ music video, Crosseyed and Painless (dir. Toni Basil), Skeeter Rabbit (left) pulls nothingness out of his pockets. The video narrates his character’s anxiety, a default condition of living in an ecological era. Reproduced by permission of MSA Agency.
Notes

1. Cary Wolfe is the architect of a de-anthropocentrism that grounds racism in speciesism. See Cary Wolfe, Before the Law: Humans and Other Animals in a Biopolitical Frame (Chicago: University of Chicago Press, 2012), and Animal Rites: American Culture, the Discourse of Species, and Posthumanist Theory (Chicago: University of Chicago Press, 2003), 1–20, 21–43.
2. A precedent for this argument can be found in Monique Allewaert’s account of “an emerging minoritarian colonial conception of agency” in Ariel’s Ecology: Plantations, Personhood, and Colonialism in the American Tropics (Minneapolis: University of Minnesota Press, 2013), 1.
3. Talking Heads, “Crosseyed and Painless,” Remain in Light (Sire, 1980).
4. Chögyam Trungpa, Glimpses of Abhidharma (Boston: Shambhala, 2001), 74.
5. Emmanuel Levinas, Existence and Existents, trans. Alphonso Lingis, foreword by Robert Bernasconi (Pittsburgh, Penn.: Duquesne University Press, 1988), 45–60.
6. Sigmund Freud, “The Uncanny,” in The Standard Edition of the Complete Psychological Works of Sigmund Freud, ed. and trans. James Strachey, 24 vols. (London: Hogarth Press, 1953), 17:218–52.
7. David Byrne, Christopher Frantz, Tina (Martina) Weymouth, Jerry Harrison, and Brian Peter George Eno, “Crosseyed and Painless.” Copyright 1980 WB Music Corp., Index Music, Inc., and E. G. Music, Ltd. All rights on behalf of itself and Index Music, Inc. administered by WB Music Corp. All rights reserved. Reprinted with permission.
8. Jacques Lacan, Écrits: A Selection, trans. Alan Sheridan (London: Tavistock, 1977), 311.
9. W. D. Richter, director, The Adventures of Buckaroo Banzai across the Eighth Dimension (20th Century Fox, 1984).
10. See, for instance, Graham Harman, Tool-Being: Heidegger and the Metaphysics of Objects (Peru, Ill.: Open Court Press, 2002).
11. Martin Heidegger, Contributions to Philosophy (From Enowning), trans. Parvis Emad and Kenneth Maly (Bloomington: Indiana University Press, 1999), 11.
12. Brian Eno, Mistaken Memories of Mediaeval Manhattan (Opal, 1981). I am grateful to Cary Wolfe for discussing this with me. See What Is Posthumanism? (Minneapolis: University of Minnesota Press, 2009), 290–93, 299.
13. See, for instance, A. D. O’Connell et al., “Quantum Ground State and Single-Phonon Control of a Mechanical Resonator,” Nature 464 (April 1, 2010): 697–703.
14. Edmund Husserl, Logical Investigations, ed. Dermot Moran, trans. J. N. Findlay (London: Routledge, 2006), 95, 98–99, 100–104, 110–11.
15. David Bohm, Quantum Theory (New York: Dover, 1989), 99–115.
16. Sigmund Freud, The Ego and the Id, trans. Joan Riviere, rev. and ed. James Strachey, intro. Peter Gay (New York: Norton, 1989), 24.
17. Paul Tillich, Systematic Theology, vol. 1 (Chicago: University of Chicago Press, 1951), 188.
18. Sigmund Freud, The Ego and the Id, trans. James Strachey, intro. Peter Gay (New York: Norton, 1989), 24.
19. Jane Bennett, Vibrant Matter: A Political Ecology of Things (Durham, N.C.: Duke University Press, 2010), 119–20.
20. Sigmund Freud, Beyond the Pleasure Principle, trans. and ed. James Strachey (New York: Liveright, 1950), 32.
21. Richard Dawkins, The Ancestor’s Tale: A Pilgrimage to the Dawn of Life (London: Phoenix, 2005), 582–94.
22. Jean-Paul Sartre, Being and Nothingness: An Essay on Phenomenological Ontology, trans. and ed. Hazel Barnes (New York: Philosophical Library, 1984), 41–42, 61–62.
23. Ian Bogost, Alien Phenomenology, or What It’s Like to Be a Thing (Minneapolis: University of Minnesota Press, 2012).
24. Timothy Morton, Realist Magic: Objects, Ontology, Causality (Ann Arbor, Mich.: Open Humanities Press, 2013), 110–51.
25. David Bohm, The Special Theory of Relativity (London: Routledge, 2006), 189–90.
26. Immanuel Kant, Critique of Judgment, trans. Werner S. Pluhar (Indianapolis: Hackett, 1987), 519–25.
27. Sesame Street, “We Are All Earthlings,” Sesame Street Platinum All-Time Favorites (Sony, 1995).
28. Jacques Lacan, Le séminaire, Livre III: Les psychoses (Paris: Éditions du Seuil, 1981), 48.
Form / Matter / Chora: Object-Oriented Ontology and Feminist New Materialism
Rebekah Sheldon
The first decades of the twenty-first century have seen a number of challenges to the centrality of epistemology in literary and cultural theory, from the rise of neuroaesthetics and machine reading to the return of phenomenology and affect theory. Despite their diversity, these new paradigms reflect an ambient dissatisfaction with the ascription of causality at the root of the theoretical enterprise by putting pressure on the equation between apt description and social change. In their own ways, each questions the importance of representation, often through an implicit argument that the distinction between reality and its mediation is out of sync with the direct intervention into material life characteristic of current practices in science and technology. Taken together, these schools of thought represent a newly emergent realism in the humanities. Within this broad and interdisciplinary movement, two methods have attained particular visibility: speculative realism, especially object-oriented ontology (OOO), and new materialism, especially feminist new materialism. Indeed, their concurrent ascendency and their shared critique of representation have made it easy to understand them as versions of each other. As I endeavor to explain, however, this is a misapprehension. I look first at object-oriented ontology’s famous rejection of “correlationism” before turning to the very different history that animates feminist new materialism.1 The term correlationism derives from Quentin Meillassoux’s After Finitude, in which he defines it as “the idea according to which we only ever have access to the correlation between thinking and
being, and never to either term considered apart from the other,” a characterization that Meillassoux applies to all philosophical approaches since Kant.2 For object-oriented ontologists, the effect of correlationism has been to dramatically limit the range of theoretical speculations to things that fall within human knowledge systems. As Ian Bogost explains in the introductory section of Alien Phenomenology:

We’ve been living in a tiny prison of our own devising, one in which all that concerns us are the fleshy beings that are our kindred and the stuffs with which we stuff ourselves. Culture, cuisine, experience, expression, politics, polemic: all existence is drawn through the sieve of humanity, the rich world of things discarded like chaff so thoroughly, so immediately, so efficiently that we don’t even notice.3

This critique of correlation—the “sieve of humanity” in Bogost’s terms—gives object-oriented ontology its grounding; however, its most characteristic gesture is not Meillassoux’s critique of correlation but Graham Harman’s notion of object withdrawal. The two work hand in hand, for OOO does not just turn our attention toward the nonhuman, it does so in order to postulate an emphatically anti-relational ontology in which objects recline at a distance from each other and from the networks in which they are embedded, very much including but not limited to human cultural practices. Profoundly thresholded, Harman’s objects stand at a remove even from themselves: internal organs are no less bounded objects than the bodies that house them, which are themselves distinct from what Timothy Morton calls “hyperobjects,” like climate.4 As Morton epigrammatically renders this point, it’s objects all the way down.5 This antipathy to relations in favor of the things themselves makes object-oriented ontology difficult to square with existing critical orientations considered broadly, but its relationship with feminism has been particularly rancorous—a push-me-pull-you of accusation and desire for affiliation that has generated both a new subfield, object-oriented feminism, and numerous heated denunciations, including the injunction apocryphally attributed to Isabelle
Stengers’s keynote speech at the 2010 Claremont Whitehead Conference, to stop talking about object-oriented ontology.6 The antagonism between these two fields is in some ways easily understood. After all, feminism is historically constituted around human subjectivity, sexed specificity, and the sculpting effects of culture. Add to that the origin of feminists’ engagement with the sciences in a critique of scientific neutrality—a critique that argues quite precisely for the intercalation of culture between things and our experiences of them—and it becomes clear why the two fields have been wary of each other. A glance at more recent scholarship, however, suggests that this agon reflects as much a set of overlaps as it does divergences. Feminist new materialism in particular has moved away from the critique of neutrality and toward the recognition of the wholly nondiscursive agency of other-than-human forces. As Myra Hird and Celia Roberts explain in “Feminism Theorises the Nonhuman,” their introduction to their special issue of Feminist Theory, one important function of the “nonhuman” as an umbrella term to cover these new realisms is the way it calls attention to the myriad ecological, biological, and physical processes that have no truck with human epistemological categories whatsoever. “The majority of Earth’s living inhabitants are nonhuman,” they write, “and nonhuman characterises the deep nonliving recesses of the Earth, the biosphere and space’s vast expanse.” The world these nonhumans occupy “exists for itself, rather than for ‘us.’”7 For Hird and Roberts, this recognition prompts a critical modesty from out of which they seek to generate a realistic ethics attentive to the impact of human culture and also to the vivacity, vulnerability, and sometimes the surly intransigence of nature. The distance between object-oriented ontology and feminist new materialism, therefore, is not a function of the ostensible anthropocentrism of a feminism grounded in identity politics, as it might initially appear. Rather, I argue that their differences result from the radically different ways in which these two fields treat human knowledge systems. For as much as they both contribute to the critique of epistemology, the causal effects assignable to knowledge-making practices continue to be prominent in their divergent understandings of the role and form of scholarship. For
object-oriented ontology, epistemology is epiphenomenal, a second-order representation whose range of effects is limited to human knowers. For feminist new materialism, by contrast, epistemology is an agent with directly material consequences. This account of epistemology is captured by the consistent use of doubly articulated phrases in feminist theory, such as Donna Haraway’s naturecultures and Karen Barad’s material-discursive intra-actions, phrases designed to collapse hierarchical dualisms and insist on the materializing force of broadly circulating ideas.8 This perspective is emphatically relational. It begins from the assumption that ideas and things do not occupy separate ontological orders but instead are co-constituents in the production of the real. It is possible, but mistaken, to read this conjunctive articulation as correlationist. Epistemology in these descriptions is not a sieve filtering out the material world and leaving only our relation to ourselves; rather, epistemology directly acts on material liveliness. For this reason, the term of art is matter. From Luciana Parisi’s abstract matter to Jane Bennett’s vibrant matter, feminist new materialism sees objects as a concrescence or intensive infolding of an extensive continuum. Matter draws together what appears separate and makes the totality subject to mutation and emergence.9 The ontological status of epistemology thus becomes legible as the result of the structuring privilege accorded to form in object-oriented ontology and matter in feminist new materialism. In this sense, the visibility and volubility of these two fields recapitulate the hoariest of philosophical binaries: the form/matter distinction. But this distinction is not only ancient. It is also central to philosophy’s own sex/gender system, a system that feminist philosophy set out to reveal, disrupt, and overturn. In militating against the excesses of correlationism, object-oriented ontology and feminist new materialism restage an old antinomy—the being/becoming problem in Plato and Aristotle. My aim for this chapter is to force a confrontation between these fields. More specifically, I use the challenge of object-oriented ontology and its privileging of form as an opportunity to reconsider the fidelity to matter in new materialism. If this is a confrontation, however, it is less to argue for one side over the other—though I suspect my leanings will be clear—than to wrest from their jagged
edges and odd overlaps a concept both sides have neglected despite its centrality to the form/matter binary they both invoke. I mean the chora, that uneasy third term that Plato, in his cosmogony the Timaeus, can neither smoothly systematize nor quite go without.10 To get there, I first turn back to an earlier moment in feminism and its texturing of body, nature, and science in order to come forward again, and through this rehearsal to underscore and swerve a vector that cuts from feminist science to the contested status of matter and process in current thought.11
Feminist Epistemology

In “The Promise of Monsters,” her contribution to the 1992 field-mapping anthology Cultural Studies, Donna Haraway proposes as the first principle of a feminist cultural study of science the mutual constitution and discursive construction of science, nature, and culture. “I take as self-evident,” she writes, “the premise that ‘science is culture.’” She continues, “Nature is . . . a tropos, a trope. It is a figure, construction, artifact, movement, displacement. Nature cannot pre-exist its construction. This construction is based on a particular kind of move—a tropos or ‘turn.’”12 Nature cannot preexist its construction. This declaration might be taken as the motto for a particularly strong attractor in thought. As a method, its power comes from the way it effects an ontological-epistemological overturning. Where once we took nature as primary, the active partner whose inflexible laws order human social relations equally as much as they do the development of organs in a body or the rate of change of a falling object, now we reject this presupposition. We unmask it as performative ideology perpetuating what it claims merely to describe. We debunk it by finding contravening instances of internal differentiation. We neutralize it with historical variation. We trace it back to the archive and discover its ignominious origins. Against the racism, classism, sexism, and homophobia justified through recourse to an authorizing and law-giving nature, we do the good work of unveiling its construction. We learn to say “allegedly,” as in: the allegedly biological pettiness of women, the allegedly contra-naturam of homosexuality, the alleged unfitness of those sterilized for being
feebleminded or nymphomaniacal or born with a tendency toward criminality. Against all this we say, There is no law of nature. Nature is a construction. Underlying this anti-natural, anti-biological orientation is a theory of representation, one that makes a particularly compelling sort of sense for professional readers of texts. Like the epistemology-ontology reversal that social construction effects, the characteristic gesture of this theory of representation is to invert common sense ideas about the operation of the senses. Thus Valerie Hartouni writes, “Seeing is a set of learned practices, a set of densely structured and structuring interpretive practices, that engages us in (re)producing the world we seem only to passively apprehend and, through such engagement, facilitates the automatic, if incomplete, operation of power.”13 In this passage, Hartouni urges us to complicate the naming theory of representation, that is, the notion that what representation represents precedes its representation, for which language merely provides a label. Or to put that more simply, she argues that what we think we know structures what we see, and not the reverse. Samuel R. Delany makes a congruent point using Leonardo da Vinci’s anatomical etchings of the child in womb. In “Rhetoric of Sex/Discourse of Desire,” the first essay in his collection Shorter Views, Delany relates to the reader descriptions of three anatomical drawings from da Vinci’s notebooks, one of a heart, one of the male urogenital system, and one of a fetus in a womb. Though these drawings have all of the characteristics of accuracy—“carefully observed, detailed, and rich in layerings” in Delany’s words—what they depict does not correspond with contemporary anatomical understandings. Instead of mirroring the real, they import specific, culturally circulating ideas about how that anatomy should look. In the example of the womb, what we now see as shaped like an upside-down pear shows up in da Vinci’s drawing as a sphere, corresponding to the “womb’s presumed perfect, Renaissance sphericality.”14 Delany takes this disparity as evidence of the distorting influence of culture, in terms highly reminiscent of the ones Hartouni uses in the passage above. Indeed, in the passage that closes this section of the essay, Delany makes clear that the purpose of this
anecdote is to provide a definition of discourse: “It is the til-now-in-our-tale unnamed structuring and structurating force that can go by no better name than ‘discourse,’” he writes, “and that force seems strong enough to contour what is apparent to the eye of some of the greatest direct observers of our world.”15 As this suggests, discourse isn’t simply speech, but the syntax of assumptions that “contour” perception through their self-evidence—precisely for the way they appear identical to nature. For literary theorists, the resulting privilege was awarded less to knowledge-making practices per se—for it isn’t really knowledge that causes us to see an apple-shaped womb where none exists—than it was to the sort of emotionally inflected, repetitive logic or structure generated by stories. This kind of storytelling is not limited to fiction. One of the most important moves in feminist epistemology was the assertion made by feminist philosophers and scholars of science that notions of gender adhered in and were circulated by fields whose content was ostensibly distant from gender, or whose inclusion of gendered metaphors was merely coincidental or ornamental and therefore unnecessary to the substance or import of the theory. Thus in Speculum of the Other Woman, Luce Irigaray critically mimes—reveals through repetition and forced conjunction—the philosophical systems of Plato and Freud in an effort to show how central gender always already was to their allegedly neutral rational systems.16 And not just the center but also the circumference, suspended in what Giorgio Agamben would later adumbrate as the constitutive exclusion, or what Irigaray diagnoses as the mute, passive substrate of a discourse that constitutes itself on women’s exclusion.17 It is no mistake, Irigaray contends, that women are caught up in relations of metaphor with the Earth, figured as the exploitable source of nurturance, with the womb as the origin from which manly activity proceeds, with nature as unruly, irrational primitive abundance, and with matter sculpted and stamped by transcendent form.18 Far from accidental, these gendered topologies provide the coherence of these philosophical systems and coordinate a fungible chain of analogies and metonymies. Thus the rhetoric of nature was held in suspicion not only for the way it was used to justify disenfranchisement and oppression. The dream of nature’s unmediated transparency, itself a version
of what Haraway calls the “paradise myth,” was also the antagonist in the hard-won recognition that representations of ostensibly empirical phenomena tell us more about ourselves than they do about the things they putatively passively reflect. And it is here that the critical enterprise staked its ground in a refrain that is still prevalent amongst humanists and social scientists working today: change the story, change the reality—a task that begins with the drama of exposure.19 In the wake of women’s long reduction to earth, matter, nature, and origin, the foundational exposure was of culture behind the curtain of the concept of nature itself.
The Feminist Critique of Correlationism: Matter

Between these various examples, a problem develops. For it is one thing to say, as Hartouni does, that experience emerges out of structures of knowledge, or that ideation is founded on and perpetuates norms of sex and gender, as Irigaray does, and altogether another to say that the real is there under our distortions, waiting for the scales to fall from our eyes. Doing so pictures a Manichean split between an obscured real and a tendentious, deluded realm of human subjectivity. That we have no access to the real then forces us to confine our interventions to the level of culture and in so doing repeats the distinction between the activity of culture and the mute inscrutability of nature, a distinction whose consequences for women’s lives feminist philosophers have made palpably apparent. It is this version of cultural theory, often called social construction, that Meillassoux’s “correlationist codicil” (AF, 13) accurately describes. As Meillassoux sees it, the problem with correlationism is that it can never affirm the independent existence of the real world. For the correlationist, no technology of empiricism—from logical argumentation to the techno-scientific apparatus—emerges without having the distorting influence of cultural assumptions built into its design. For this reason, the correlationist relates everything back to human knowledge systems. This is the famous “ancestrality” problem:

Consider the following ancestral statement: “Event Y occurred x number of years before the emergence of humans.”
The correlationist philosopher will in no way intervene in the content of this statement. . . . She will simply add . . . something like a simple codicil . . . : event Y occurred x number of years before the emergence of humans—for humans (or even, for the human scientist). . . . Accordingly, when confronted with an ancestral statement, correlationism postulates that there are at least two levels of meaning in such a statement: the immediate, or realist meaning; and the more originary correlationist meaning, activated by the codicil. (AF, 13–14)

As I have endeavored to demonstrate, this “shifting holistic world of interrelated significances” is precisely the prophylactic that social construction set up against the imperialism of natural law.20 Working alongside the social constructionist understanding of discourse as a veil obscuring the brute reality of material stuff, however, is a different tradition. Karen Barad in her “Posthumanist Performativity” starkly frames feminism’s own internal critique: “Language,” she writes, “has been given too much power.”21 In its bifurcation of a representational regime to which we have access and a material reality that regime both reflects and occludes, linguistically based approaches share a structure of belief with the positivism they sought to overturn through the shared conviction in the inherent properties possessed by the stuff of this world. Significantly, Barad’s alternative is not to return to naïve realism but rather to bring forward the crucial and hard-won link feminist science studies scholars forged between epistemology and materiality by asking after the materializing effects of discourse. Barad aligns what she calls “agential realism” with performativity. Using the work of physicist Niels Bohr, she offers an account of performativity that highlights its material reality: “Apparatuses are not static arrangements in the world that embody particular concepts to the exclusion of others; rather, apparatuses are specific material practices through which local semantic and ontological determinacy are intra-actively enacted.”22 For Barad, in other words, matters and discourses are co-constituting, and so asking what knowledge does is always a matter of asking after its ongoing entanglements.23 Indebted to Foucault’s account of disciplinary apparatuses and
to Bruno Latour’s actor-network theory as well as the physics on which she bases her analysis, agential realism insists that properties are the products of local intra-actions between actants of many kinds. Or, to put it epigrammatically, for Barad relations precede relata, which then alter relations. And properties, which we commonly understand as the possessions of individuals, are instead emergent features of entangled phenomena. The point here is that intra-actions are live. Barad coins the term intra-action to undo the implicit understanding of interaction as the meeting of two already-formed objects. Rather, intra-actions instantiate boundaries anew. This has been a particularly productive approach for a feminist reading of science attentive to the fine-grained plasticity of bodies and the weird ontologies of physics. For example, in her The Body Multiple, Annemarie Mol urges us to move away from the question of how medicine knows the body to ask instead how medical practice enacts the body. She looks at the treatment of heart disease and the distance between a reductionist discourse of illness and the actual practice of stimulating, containing, molding, and redirecting entities: “Which entity? A slightly different one each time.”24 In The Origins of Sociable Life, Myra Hird gives this same interest in subindividual liveliness the name microontologies and uses it to examine the intra-active instantiations of bacteria, sex, and human sociality.25 Stacy Alaimo’s term is transcorporeality, a term that she uses in Bodily Natures to consider the constitutive openness that enables the systemic complexity of the environment to produce system-wide catastrophe.26 That epistemology and ontology are linked—that what we know sculpts how we act—is our legacy from social construction. That matter also acts and in sometimes unexpected ways is the contribution of feminist new materialism and its difference from correlationism. For feminist new materialism, the solution to the problem of women’s historical assimilation with nature, matter, earth, and origin—the problem that led social constructionist feminism to culture—is to sidestep the essentially ideological use of these terms.27 The world they find in its stead is rich, strange, and as transformative for our understandings of sex and gender as it is for our conception of time, space, matter, and individuality. It is perhaps tendentious but nonetheless true that these works were
published—and certainly discussed and circulated—in the same period as, if not prior to, Meillassoux’s After Finitude, Harman’s Tool-Being, and the collectively edited Speculative Turn book in whose introduction Levi Bryant, Nick Srnicek, and Harman assimilate all contemporary thought as anti-realist: “In this respect,” they write, “phenomenology, structuralism, post-structuralism, deconstruction, and postmodernism have all been perfect exemplars of the anti-realist trend in continental thought.”28 They continue:

The first wave of twentieth century continental thought in the Anglophone world was dominated by phenomenology, with Martin Heidegger generally the most influential figure of the group. By the late 1970s, the influence of Jacques Derrida and Michel Foucault had started to gain the upper hand, reaching its zenith a decade or so later. It was towards the mid-1990s that Gilles Deleuze entered the ascendant, shortly before his death in 1995, and his star remains perfectly visible today. But since the beginning of the twenty-first century, a more chaotic and in some ways more promising situation has taken shape. While it is difficult to find a single adequate name to cover all of these trends, we propose “The Speculative Turn” as a deliberate counterpoint to the now tiresome “Linguistic Turn.”29

The absence of women from this story of succession is remarkable both for its casual and apparently unwitting embrace of patrilineation, but also, and more incisively, for the distortions it relies on to produce such a clean line of descent.30 For the production of a monolithic and homogeneous “linguistic turn” in the humanities is only possible through a constitutive misreading of Derrida, Foucault, and Deleuze and from the strategic exclusion of the work done in feminist science studies for the past two decades. As I hope to have demonstrated, it is in fact difficult to find a moment in feminist science studies when questions of embodiment, nature, science, realism, and referentiality were not explicitly at stake. This, then, is the true intervention of object-oriented ontology: the way it forces new materialism together with social construction under the heading of correlationism. The assimilation
of these fields—understood by their own practitioners as critical antipodes—has generated a sharp shock to criticism that has in turn allowed the middle ground and its array of objects to stand forth. And yet, against this critical tide, it is my contention that matter has not yet been given full rein to generate new methodologies for critical theory. The assimilation of feminist theories of matter with cultural construction elides the way that matter functioned as an internal critique of cultural construction, one that sought to retain the link between epistemology and materiality while also arguing for the autonomy and wayward agency of the extra-discursive world—or rather, their discontinuous co-modulations. Assimilating matter as the reverse side of cultural construction through the auspices of the correlationist’s “co-” obscures the ways in which feminist new materialists have sought to inhabit the concept of matter as a site in which to build a materialist account of complex causality within open systems—one that adheres neither at the level of a closed totality nor from the perspective of the atomized individual but rather as a trans-individual assemblage whose motions are greater than the sum of its parts.
The Object-Oriented Critique of Correlationism: Form

It is no mistake, I am arguing, that the Speculative Turn volume, in mapping the terrain from which object-oriented ontology emerged, wrote out the history of feminist inquiry, or that under the auspices of Meillassoux’s correlationism the critical antipodes of feminist new materialism on the one hand and social constructionism on the other, a particularly knotty and long-winded debate within feminism itself, get collapsed.31 This collapse, I contend, is a way of confining the energy of those concerns, the propulsion that they produced for thinking the nonhuman, to a discredited discourse whose energy it can then usurp. OOO has been so provocative for feminist theorists because of its cannily unknowing usurpation of the energies of feminist thought and its relegation of that history to footnotes within its own autobiography. This, however, would not rise much above the level of rhetoric were it not the case that the process of usurpation and confinement is also characteristic of OOO’s objects, which gain their
Form / Matter / Chora
weird vitality by siphoning off and rendering sterile the plenum of non-object-bound energies. For as much as both fields have been hailed under the heading of materialism, OOO is as antimaterialist as it is anticorrelationist. In fact, Harman explicitly defines object-oriented ontology as “the first materialism in history to deny the existence of matter” (TB, 293), and its denigration of matter is foundational and pervasive. Tim Morton calls philosophies that favor concepts of flux, flow, process, pattern, and contingency over notions of stability, essence, solidity, interiority, and permanence “lava lamp materialism.”32 Bruno Latour, writing in his “Compositionist’s Manifesto,” cites Alfred North Whitehead’s argument that matter is an Enlightenment idealization.33 The problem with matter for object-oriented ontology is that it allows us to skip over objects by seeing through them to the substratum that bears them, a process Harman calls “undermining” as opposed to what he calls the “overmining” of objects through historical contextualization or epistemological reading. The account of the object that animates object-oriented ontology sees the object as an eternal form, which can neither be sculpted by discourse nor obscured by representation because it is vacuum-sealed against these realms. What object-oriented ontology offers is the thing in itself—not the historical conditions of its emergence, not the meanings it circulates, but the object and its qualities in their tensile interrelations. But it is not a naïve object that they offer. For Harman, the object’s resistance is a consequence of its general withdrawal from relations, including human perception. The object in OOO’s sense is a “black box, black hole, or internal combustion engine releasing its power and exhaust fumes into the world.”34 Those fumes, or “plasma” as he refers to them in another instance, are the object’s qualities, which are always limited, partial, multifarious, and brought into meaning-systems differently in different periods and cultures.35 Despite this, the object retains its unity in the dignity of its seclusion. Its being does not bear on its meaning nor vice versa. “Objects,” he writes, “are autonomous from all the features and relations that typify them, but on the other hand they are not completely autonomous.”36 He calls on us to give an account of this ambivalent, semi-detached object. An example might help to clarify this matter-less ontology. In a
Rebekah Sheldon
thought experiment from his 2002 Tool-Being, Harman asks where we should look to find matter. He takes as his privileged instance a children’s amusement park ride, the Ferris wheel. It is worth noting that this form of exemplification is typical of Harman’s prose. He is less interested in the workings of a particular sphere—quantum physics in Barad’s case, symbiogenesis in Hird’s—than in using objects to adumbrate the workings of his philosophical system. The Ferris wheel, he argues, can be broken down into “numerous bolts, beams, and gears in its mechanism” (TB, 293). These pieces that were the Ferris wheel are now something different—bolts, beams, and gears. If they are recomposed in the production of a piece of public art, for example, they will once again be subsumed. But there is never a moment of indeterminacy between these positions when the bolts, beams, and gears return to a primordial state of undifferentiated matter. “Above all else,” he concludes, “the ‘parts’ in question here are form, not matter. . . . What is real in the cosmos are forms wrapped inside of forms” (TB, 293). For Harman, even the relation between parts is a form, the pulley-and-lever system of the Ferris wheel forming a little machine whose functioning is separable from and in excess of the withdrawn objects (bolts, beams, and gears) that compose it. Because they are withdrawn, their potentiality is not exhausted. They can go on to take part in the machine of public art, or the assemblage called the dump. Wherever they wind up, they are still productive because they are still withdrawn. In contradistinction to materialism, he calls this “formalism” (TB, 293). The Ferris wheel argument has the force of common sense. As Harman reminds us, we rarely think of the atoms of iron that compose an amusement park ride before we get on it, still less if our aim is to dismantle the ride and haul it off to the junkyard. Even if we were to get down to the infinitesimally small, however, we would still find forms that are separable from the totality that they compose, little machine atoms whose relations never exhaust the capacities of the individual entities that compose them to engender other relations. The torque on a bolt may distort its shape and strip its thread, weather may rust its metal—qualia, in other words, may change—but the bolt-qua-bolt remains unperturbed even when it is little more than a piece of broken, rusty metal. “If an entity always holds something in reserve,” Harman writes, “and if this reserve
also cannot be located in any of these relations, then it must exist somewhere else” (TB, 230). That somewhere else is in the unapproachable “molten core” at the center of the withdrawn object.37 For Steven Shaviro, the withdrawn quality of objects makes object-oriented ontology a version of substantialism, a claim that Levi Bryant embraces in his Democracy of Objects.38 Drawing from Aristotle’s account of substance, Bryant underscores Harman’s essentially dualistic notion of the object as composed of virtual proper beings (or substance) and local manifestations separate from that substance.39 Consolidated inside of the object, substance maintains its perforations whatever permutations its properties might exhibit; no cut or entanglement ever penetrates deeply enough. Rather than Aristotle’s view of substance, then, I suggest that object-oriented ontology’s split object recalls Plato’s account of form and his foundational distinction between that which “always is and has no becoming” and that which “comes to be and never is” (Timaeus, 27D–28A). For Plato of the Timaeus as for Harman, the substance of an object never changes, subsisting always in a self-same condition of being, while its accidental qualia and exogenous relations are alone capable of becoming and perishing away again. And much as it did for Timaeus, this division between “that in which [a form] comes to be, and that from which what comes to be sprouts as something copied” (emphasis in original, 50D) requires a disruptive third term—the receptacle, chora, or womb, neither in the realm of being nor becoming—whose evident deployment of sexed metaphors is very much to the point. Yet rather than retreat from the chora and its troubling sexual politics, I follow Judith Butler’s lead in pushing the notion of the chora beyond the bounds of the systems that contain it. This potentiality, I argue, is generated from the stalled antinomy of object-oriented ontology and feminist new materialism—immanent in them both, but fully pursued by neither. The next section explores some of the limitations of matter in feminist new materialism.
The Persistence of Things

Harman’s repudiation of matter reveals a surprisingly similar tendency in feminist new materialism. The examples we have reviewed
of the performative co-constitution of bodies and discourses make rigorously concrete the way meaning matters. However, their very specificity privileges the demonstrable. As we saw in the Ferris wheel example, matter requires a willingness to entertain that which escapes the procedures of demonstration. For Harman, the very inability to put one’s finger on matter proves its inexistence. But the recession of matter in demonstration afflicts feminist new materialism as well. I want to turn now to an example given to us by the political philosopher Jane Bennett in her Vibrant Matter, because in using matter to think about politics, Bennett’s work exhibits its brilliance and also underscores its limitations.40 Bennett calls her approach a “vital materialism” and defines it as an attempt to “dissipate the onto-theological binaries of life/matter, human/animal, will/determination and organic/inorganic” (VM, x) in order “to enhance receptivity to the impersonal life that surrounds us and infuses us” (VM, 4). To exhibit the value of this approach, she takes up the example of the 2003 North American Blackout. “To the vital materialist,” she writes, “the electrical grid is better understood as a volatile mix of coal, sweat, electromagnetic fields, computer programs, electron streams, profit motives, heat, lifestyles, nuclear fuel, plastic, fantasies of mastery, static, legislation, water, economic theory, wire and wood” (VM, 25). Together, this assemblage of actants produces something else—literally volatility—through the mattering of power, or rather through the production of its failure. Irritable, jittery, tending toward change: this vision provides a clear and recognizable example of how fantasies of mastery solicit the excitability of electricity. Unlike social construction, the emphasis here is on the agency of power as it succumbs to and exceeds human management systems. No longer Delany’s spherical womb and its emphasis on what the eye can see, it is because bodies are restive, not at all quiescent, that they can be bound and shaped to particular forms. At the same time, however, Bennett’s actants remain visually distinct on the page, both preceding and succeeding the relations that brought on the power failure, in a visual echo of the trademark listing style of OOO. These lists stand in metonymically for the randomness of the object world. Indeed, Bogost built his
famous Latour Litanizer specifically to generate discontinuous sets of things.41 Although Bennett’s list is oriented toward relationality, since her point is that each piece is entangled in an emergent phenomenon with all the others, the list form itself highlights the separability of the objects it houses. What is missing from her list is volatility as a quality of relations rather than of objects. Bennett wants to argue that the power failure, like Nietzsche’s famous discussion of lightning in On the Genealogy of Morals, is not the product of the grid but instead one of its potential expressions. Yet it is exactly that emergent property that slips the noose of the list, operating in the gaps between its actants. Bennett and Barad both insist that there is life in the interstices, an inorganic life that moves as vigorously through the biological as through the machinic and the ideational. My purpose in this very brief critique has been to show how easily the apprehension of that life recedes under the requirements of demonstration to be replaced by a network of discrete parts. Through the analytic of the network, Bennett can present the North American Blackout as a seamed whole rather than as a unified totality and thus avoid the pitfalls of essentialism. That very presentation, however, obscures what Hasana Sharp characterizes as the “supersaturation” of any system with “energetic force that is composing and recomposing in new forms, in response to new tensions, at all times.”42 More pointedly, the absence of a lexicon for apprehending that supersaturation then impoverishes our inquiry into its modes of composition and conceals the ways in which we enter into that composition. The question, then, is not just of forms and matters, but of the space that holds them both. The persistence of things in feminist new materialism reveals the enduring difficulty of articulating a space or plenum that is also a dunamis. The more we see the former, the more obscured the latter becomes and vice versa. As Eugene Thacker writes, the contradiction between “an immanence that is placid, expansive, and silent, and a vitalism that is always folding, creating, and producing” appears irresolvable.43 In rejecting materialism, what object-oriented ontology refuses is the possibility of a dynamized space between objects. By the same token, the persistence of things in feminist new materialism indicates a
hesitancy to fully name and defend such a potentially irresolvable problematic. Yet this problem also reveals an enduring desire to collapse the distinction between plenum and dunamis. Indeed, the language of spaces and forces evident in Sharp’s “supersaturation” runs like a minor chord through the critical enterprise, subtending even apparently epistemological or linguistic accounts. It is evident, for example, in Foucault’s discussion of his method in History of Sexuality, volume 1, where he writes:

It seems to me that power must be understood in the first instance as the multiplicity of force relations immanent in the sphere in which they operate and which constitute their own organization; as the process which, through ceaseless struggles and confrontations, transforms, strengthens, or reverses them; as the support which these force relations find in one another, thus forming a chain or a system.44

Foucault’s injunction can be and very often has been understood through the logic of the dispositif—the meeting of powers converging on each other via molar organizations, regulatory apparatuses, legal frameworks, discursive networks, and their various idioms and appurtenances—and for good reason.45 Paying attention to the nodes that make up the “chain or system” operating within a sphere, as the dispositif model holds, however, has the effect of equating relations of force with exertions of power, collapsing effect into cause and writing out the concussive meetings between nodes. What it means for “a multiplicity of force relations” to “constitute their own organization,” in other words, changes substantially if we take “force relations” on a physical rather than a sociological model.46 In such a model, relations of force cannot be reduced to the actions of entities. In Manuel De Landa’s account, the force relations of complex systems are the attractors and bifurcations that are both intrinsic features of dynamic systems and yet that “have no independent existence” and therefore cannot be understood as products of molar objects in any straightforward sense.47 It is for this reason that Gilles Deleuze calls Foucault’s thesis on power a “physics of action.”48 Yet even in De Landa’s alternative conceptualization of a “single phylogenetic line cutting through all matter,
‘living’ or ‘nonliving,’ ” the presence of terminology from evolution indicates the ease with which dunamis can become reconsolidated as a property of bodies that recline in the passive plenum of space.49
The Chora

I propose the chora as a way to grasp this dynamized space. The chora comes to us from Plato’s Timaeus, which tells the story of how the temporal world arose from the eternal.50 As this implies, much of the text concerns the split between the eternal world of forms—unchanging, apprehensible by the intellect but without sensuous equivalent—and the world to which we have access. Timaeus’s account of the chora comes in the middle, breaking into the smooth flow of its narrative line. Timaeus finds that he can no longer make do with the two kinds—the realm of being and the realm of becoming—that have until this point sufficed. He runs into the necessity for that which should not be necessary, the product of “bastard reasoning . . . hardly to be trusted, the very thing we look to when we dream and affirm that it’s necessary somehow for everything that is” (emphasis in the original, 52A). The problem is this: To move into the temporal world of becoming, the transcendent form must have “birth and visibility” (50D). Eternal models must become imitations of themselves. If this is so, then the form must have something into which it descends, something separate from the copies that it will generate and that make up the temporal world. Since eternal forms cannot enter the realm of becoming, yet must put their impress into substance, there must be a third realm. Form must be housed somewhere in something while it undergoes its transformation. To correct this difficulty, Timaeus conjures up a third kind, neither a model nor a copy, neither being nor becoming: the chora or the space of generation. Explicitly framed in hetero-reproductive terms, the chora is “mother,” “womb,” “wet-nurse,” and “receptacle” (48C–50E) to the fathering form. The eternal form enters into the chora but takes nothing of her nature. She serves wholly as the space of transmission. Yet it is not for nothing that the chora is introduced late: it is both necessary and inassimilable, disrupting the distinction between being and becoming by taking part in both but being faithful
to neither. Where, after all, did this third realm come from? Part of neither kingdom, the chora is the “wandering cause” (48C) that holds together and disrupts the movement from potentiality to actuality, swerves the smooth transition from model to copy and offers a notion of systemic agency that operates in the interstices between objects. In her reading of these passages, Judith Butler highlights the passivity and shapelessness Plato assigns to the chora and that marks its difference from the active shaping of the “father” or the eternal form and the degraded, inauthentic activity of the copy in the world of becoming. As wandering cause, the chora holds the potential for uncontrolled generation, for dynamic change that is neither a product of the eternal form nor its diminution in the realm of becoming. While giving place to the spontaneous generation of new orders, the Timaeus’s family romance thus represents an attempt at domestication in what Butler calls a “topographic suppression”: from matrix to place, from chora to plenum, from wandering cause to wet-nurse.51 This topographic suppression is precisely Harman’s move. He consolidates relationality and potentiality to the interior of the object. In so doing, he brings the chora into the object, providing each real sensible thing with its own internal generator, “the molten inner core of objects.”52 This, I suggest, represents a new kind of domestication of the chora. No longer passive midwife of sensible form, dynamism is now locked in as the engine of form, vitalizing its limits and thresholds. What would it look like to release the chora from this topographic suppression? At one point, Butler toys with the idea of what she calls an “irruptive chora” but ultimately rejects it as a false escape.53 And yet this irruptive chora, it seems to me, is just what Plato’s domestication sought to avoid and yet could not quite do without. As such, it offers an opportunity to imagine an autonomous, dynamic, temporalized space through which subindividual matters, vibratory intensities, and affects might cross and be altered through that crossing. This is the crucial point. The irruptive chora enables us to apprehend with what frequency the plenum or spatium is posed as passive, even in new materialist writing. My description of the chora, for example, is mostly consonant with Deleuze and Guattari’s notion of the body without organs. In light
of the irruptive chora, however, their description of “the unformed, unorganized, nonstratified or destratified body . . . [that] causes intensities to pass” seems strangely passive.54 The contradiction between plenum and dunamis appears logical precisely because it begins from an originary cut between the given and immutable, on the one hand, and the contingent and mutational, on the other. A resurgent, vitalized understanding of the “sphere,” the “support,” the “chain or . . . system,” “the moving substrate” in Foucault and the “spatium” in Deleuze and Guattari begins to suggest a way back to the chora in its activity, to inhuman reproductions, to an irruptive chora that exerts its own autonomous force. What I am proposing bears resemblance to what Pheng Cheah, in a discussion of Deleuzian virtuality, introduces as a double articulation between the virtual “speeds and intensities” that generate an actual object and the object itself. This causality goes in both directions: “On the one hand,” Cheah writes, “the actual object is the accomplished absorption and destruction of the virtuals that surround it. On the other hand, the actual object also emits or creates the virtual since the process of actualization brings the object back into relation with the field of differential relations in which it can always be dissolved and become actualized otherwise.”55 The dualism Cheah proposes between the actual object and its virtualities creates a separation between the background of speeds and intensities that get captured by the drag of organization and sedimented into an actual object whose constrained vibratory intensity then ripples back across the field of force relations. Such an analytic could then ground a physics of force and a method that accounts for its operations. Taken in its most robust form, this revivified chora generates an ontology of material-affective circulation. As a tertium quid, the “third thing” that transmits and transforms dynamic form, the chora both enables and distorts the autopoiesis of apparently incorporeal matters like thought. As Brian Massumi writes, “No longer beholden to the empirical order of the senses, thought, at the limit, throws off the shackles of reenaction. It becomes directly enactive—of virtual events.”56 Thinking of representational media within the terms enabled by this analysis of the chora gives them autonomy from their human reception: they are autonomously mobile, subject to the same patterned movements as
those that characterize physical and biological systems, and they are autonomously causal, interacting with each other to engender new vanishing points. In the final section I point to some ways of thinking the practice of choratic reading.
Practicing Choratic Reading

“Something’s doing”: the opening words of Massumi’s Semblance and Event capture the premise of choratic reading and its point of intervention. Two decades of critical scholarship in feminist materialism and science studies have made it possible to say “There’s happening doing” and to indicate by that phrase the agency of human and nonhuman bodies, organic and nonorganic vitalities, discourses and the specific material apparatuses those discourses are.57 This critical posture has also begun to redound on the practice of scholarship itself. From Eve Kosofsky Sedgwick’s largely conceptual call for a greater scholarly embrace of the sensual and reparative to object-oriented ontology’s desire to “go outside and dig in the dirt,” the questions of scholarship’s own affective and material basis have started to receive serious treatment.58 In How We Think, for example, N. Katherine Hayles argues for a “practice-based research” that would employ embodied interactions with other-than-verbal materials to generate unexpected kinesthetic and temporal experiences.59 In similar fashion, Erin Manning and her collaborators run the SenseLab as a center for research creation by emphasizing what they call “the active passage between research and creation” in real-time and asynchronous collaborations.60 Each of these examples widens and diversifies the modes in which scholarship gets made. By arguing that how we make things affects the things we make, Hayles, Manning, and Bogost demonstrate that the study of matter, affect, and embodiment need not and should not take place primarily through the study of texts but can instead be theorized through and as practice. This laboratory model of criticism incisively shifts the scene of production. Operating on the other side of this relationship, scholars working in the digital humanities and in rhetoric have begun to develop alternatives to the linear essay. However, with notable exceptions, these emergent models—such as data visualization and
other forms of machine reading—retain demonstrative argumentation as their primary goal.61 Although machine reading allows us to find patterns at scales otherwise impossible, the questions and conclusions we bring to bear on these patterns have remained largely congruent with close reading’s persuasive demonstration. By contrast, if choratic reading can be said to have an argument, it is that new concepts arise as much through affective engagement as through rational demonstration. As one of the forces behind this shift, object-oriented ontology might reasonably be expected to proffer a new way to navigate the relations between cultural production, aesthetic form, and subject formation. Indeed, OOO has been called on to perform just such an analysis, and Harman has duly responded. In his “Well-Wrought Broken Hammer” article published in New Literary History, Harman reviews examples of new criticism, new historicism, and deconstruction before positing his own “object-oriented method” (WWBH, 200). What he finds in his review is that each school of criticism fails in the same way that he understands pre-OOO philosophy to have failed. By “dissolving a text upward into its readings or downward into its cultural elements” (WWBH, 200), criticism never lands on the text itself.62 Instead of these two procedures, Harman urges literary critics to work toward discovering the qualities that make works of literature what they are. “Instead of just writing about Moby Dick,” he writes, “why not try shortening it to various degrees in order to discover the point at which it ceases to sound like Moby Dick? Why not imagine it lengthened even further, or told by a third-person narrator rather than Ishmael, or involving a cruise in the opposite direction around the globe?” (WWBH, 202). The purpose of rendering such permutations is to discover what in the text is accidental—the qualia of the work—and what makes up its molten interior. As a method, then, its mode is primarily taxonomic, a categorizing enterprise with the added capacity to discriminate between the features of a work that allow it to “withstand the earthquakes of the centuries” (WWBH, 201) and those that prove irrelevant to its essential haecceity. Harman thinks of the new critics as having come closest to this goal in their focus on the individual work, and indeed such a method goes quite a ways toward weeding out of literary studies the very assertions that
have made it a radical and generative field since the new critics: that texts speak more than they know, that the devil is in the weave of their details, that repetitions across texts are meaningful and therefore interpretable. None of this is really very surprising, of course.63 Yet several aspects of this method do surprise. It is first of all surprisingly epistemological. As we have seen, one of the defining features of object-oriented ontology’s divided objects is their inaccessible interiors that repel “all forms of causal or cognitive mastery” (WWBH, 188). Starting from this definition, it is not at all clear that any amount of cutting or rearranging will bring the inner core of Moby Dick into relief. Nor is it clear what besides “causal or cognitive mastery” this act would contribute. Moreover, for a field that trumpets the democracy of objects, Harman’s object-oriented method is singularly unimpressed by the force of criticism. Cutting into the text and recording the results frames the work of literature as primary and the work of criticism as merely adjunct reportage. Finally, and ironically, the examples Harman provides all focus on the text as narrative plot. The text’s formal composition, its “structure of meanings” (WWBH, 190), is excluded from the outset as a clearly false prejudice of new criticism. With no relations inside or outside, object-oriented method reveals the immobility generated by interiorizing dunamism within a sterilized plenum. What object-oriented method cannot see is the material force of literature as it enters into composition with other vibrant matters. Choratic reading, by contrast, begins from the assertion that acts of literature—very much including scholarly readings—are performed in material composition with the affordances of their media, the sensorium of their audiences, and the deformations of dissemination as they transduce across and are deformed by the irruptions of the choratic plane. In this sense, books and their readers form zones of intensity in composition with the interstitial field of already-circulating energies and the attractors and bifurcations of the choratic plane. The political purpose of interpretation informs the form of the reading as well as its content. Sedgwick has written at length about the unacknowledged affective register of hermeneutics. Rather than restricting the import of this affection to readers, choratic reading practice opens scholarly affect to flows of all kinds. By matching the affective milieu of its object,
choratic scholarship can underscore, extend, thin out, modulate, or swerve its circulation. Such a reading protocol transforms our angle of inquiry from the text as representational symptom to the dynamic form of the media object. To effect this transformation, I propose a lexicon of terms: composition highlights the interrelations between parts; movement refers to the characteristic circuiting of energy through that form; sound to the layers or tonal stacks striating it; rhythm to the vibratory milieu created by it; and, finally, gesture to the capacities for connection and the production of potentialities. Together, these properties illustrate the internal workings of a form and its relations to the aesthetico-political milieu. Employing this method of analysis allows us to pose questions about texts that concern their effects as form. In this context, we can ask what shape the text creates and how that shape circulates affective force, how it moves across time and how it forms relationships: from explosive forms meant to blast apart overly strong captures to shapes bristling with receptors or catalyzing shapes meant to actualize potentiality in encounters. Reading in this way emphasizes design and so alters the formal distinctions between creative and critical compositions. “What does this have to do with science?” Elizabeth Grosz asks in the course of her little monograph Chaos, Territory, Art. As I hope I have demonstrated, feminist science studies has traced an arc through the question of representation, moving from the urgent and necessary epistemological task of untangling science’s encoding of and complicity with sexed, gendered, raced, and anthropocentric assumptions through the hard-won and hard-maintained insistence on the ontological co-constitution of matters and discourses to the recognition of the wholly non-discursive agency of other-than-human forces. The question of representation, of the relations between things in the world and the stories we tell about those things, continues to animate feminist science studies. But we haven’t yet asked what might happen to our sense of scholarship—of meaning, of concept, of learning—if we repose the question of representation, but this time to ourselves. What might we find if we begin from the premise that there is a material connection between the artifact of scholarship, its producers,
and its audience? What might emerge if we invite our scholarship to inhabit other forms and allow those forms to emerge through the compositional process? What if we stop taking it for granted that we understand how ideas are transmitted? In answer to the question posed at the start of this paragraph, Grosz writes, “The material plane of forces, energies, and effects that art requires in order to create moments of sensation that are artworks are shared in common with science. Science, like art, plunges itself into the materiality of the universe.”64 Choratic reading is one attempt at the becoming-imperceptible of art, science, and criticism.
Notes

My thanks to Richard Grusin, Jamie Skye Bianco, Annie McClanahan, Karen Weingarten, Ted Martin, Julian Gill-Peterson, Joseph Varga, and an anonymous reviewer at the University of Minnesota Press for conversation, speculation, skepticism, camaraderie, and incisive commentary. Thanks also to the Center for 21st Century Studies at the University of Wisconsin-Milwaukee and the 2011–12 Center Fellows for the support needed to write this chapter.

1. Although this holds in general, it is worth noting that one of the key differences between speculative realism (SR) and object-oriented ontology concerns how they understand the consequences of correlationism. Graham Harman explains in “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism,” New Literary History 43, no. 2 (Spring 2012): 183–203, that where SR affirms the descriptive accuracy of mathematical statements about the world, OOO denies absolute knowledge. Hereafter, the article is abbreviated as WWBH in parenthetical citations in the text.
2. Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency (London: Continuum, 2010), 5. Hereafter cited as AF in parenthetical citations within the text.
3. Ian Bogost, Alien Phenomenology, or What It’s Like to Be a Thing (Minneapolis: University of Minnesota Press, 2012), 3.
4. Timothy Morton, “They Are Here: My Nonhuman Turn Talk,” Ecology without Nature blog, May 4, 2012, http://ecologywithoutnature.blogspot.com/.
5. Timothy Morton, “Here Comes Everything: The Promise of Object-Oriented Ontology,” Qui Parle 19, no. 2 (Spring/Summer 2011): 163–90.
6. I am aware of the irony involved here.
7. Myra Hird and Celia Roberts, “Feminism Theorises the Nonhuman,” Feminist Theory 12, no. 2 (August 2011): 111.
8. See Donna Haraway, The Companion Species Manifesto: Dogs, People, and Significant Otherness (Chicago: Prickly Paradigm Press, 2003); and Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham, N.C.: Duke University Press, 2007).
9. In this sense feminist new materialism brings together the emphasis on the productivity of discourse in Michel Foucault’s account of power with Baruch Spinoza’s cosmogony.
10. Plato, Timaeus, ed. Oskar Piest, trans. Francis Cornford (New York: Macmillan, 1959).
11. I am keenly aware of the problem with enacting lineages. My effort here is less to posit the differences between moments in feminist inquiry than to show their continuities. However, I do think that some delineation is required in the wake of OOO’s collapse of distinction, as I explain following.
12. Donna Haraway, “The Promise of Monsters: A Regenerative Politics of Inappropriate/d Others,” in Cultural Studies, ed. Lawrence Grossberg, Cary Nelson, and Paula A. Treichler (New York: Routledge, 1992), 296, 297.
13. Valerie Hartouni, Cultural Conceptions: On Reproductive Technologies and the Remaking of Life (Minneapolis: University of Minnesota Press, 1997), 8.
14. Samuel Delany, “Rhetoric of Sex/Discourse of Desire,” in Shorter Views: Queer Thoughts and the Politics of the Paraliterary (Hanover, N.H.: Wesleyan University Press, 1999), 3, 4.
15. Ibid., 5.
16. Luce Irigaray, Speculum of the Other Woman (Ithaca, N.Y.: Cornell University Press, 1985).
17. See Giorgio Agamben, Homo Sacer: Sovereign Power and Bare Life (Stanford, Calif.: Stanford University Press, 1995).
18. This point is also made extensively by ecofeminists; see in particular Carolyn Merchant, The Death of Nature: Women, Ecology, and the Scientific Revolution (New York: Harper and Row, 1980).
19. On the “drama of exposure,” see Eve Kosofsky Sedgwick’s biting and bravura essay “Paranoid Reading, Reparative Reading, or You’re So Paranoid You Probably Think This Essay Is about You,” in Touching Feeling: Affect, Pedagogy, Performativity (Durham, N.C.: Duke University Press, 2003), 123–52.
20. Graham Harman, Guerilla Metaphysics: Phenomenology and the Carpentry of Things (Peru, Ill.: Open Court Press, 2005), 113.
21. Karen Barad, “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter,” Signs: Journal of Women in Culture and Society 28, no. 3 (2003): 801.
22. Ibid., 820.
23. For the physics behind this oversimplified summary, see Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham, N.C.: Duke University Press, 2007), especially chapter 7.
24. Annamarie Mol, The Body Multiple: Ontology in Medical Practice (Durham, N.C.: Duke University Press, 2002), vii.
25. Myra Hird, The Origins of Sociable Life: Evolution after Science Studies (New York: Palgrave, 2009).
26. Stacy Alaimo, Bodily Natures: Science, Environment, and the Material Self (Bloomington: Indiana University Press, 2010).
27. In this sense, feminist new materialism picks up on Simone de Beauvoir’s tack in The Second Sex of responding to the nature argument by looking directly at nature.
28. Graham Harman, Tool-Being: Heidegger and the Metaphysics of Objects (Peru, Ill.: Open Court Press, 2002), hereafter abbreviated as TB in parenthetical citations in the text. Levi Bryant, Graham Harman, and Nick Srnicek, eds., The Speculative Turn: Continental Materialism and Realism (Melbourne: re.press, 2011), 3.
29. Bryant et al., Speculative Turn, 1.
30. Blog commentary tends to point out that only one woman—Isabelle Stengers—was included in the collection. Though this is a relevant point, my argument is less concerned with inclusion per se than with the logic that drives the absence in the first place. For a good example, see “some background on Harman and Speculative Realism (and cool new book series),” New APPS: Art, Politics, Philosophy, Science blog, February 15, 2011, www.newappsblog.com/. It’s worth noting that the commenter in question is Melinda Cooper, whose work in Life as Surplus: Biotechnology and Capitalism in the Neoliberal Era (Seattle: University of Washington Press, 2008) well exemplifies feminist new materialist approaches.
31. This is less an interpretation than a restatement of Harman’s own explicit position on the twin techniques of undermining and overmining in his “On the Undermining of Objects: Grant, Bruno, and Radical Philosophy,” in Bryant et al., The Speculative Turn.
32. Timothy Morton, “Of Lava Lamps and Firehouses,” Ecology without Nature blog, November 14, 2012, http://ecologywithoutnature.blogspot.com/.
33. Bruno Latour, “An Attempt at Writing a Compositionist’s Manifesto,” New Literary History 41, no. 3 (Summer 2010): 471–90.
34. Harman, Guerilla Metaphysics, 95.
35. Ibid., 106.
36. Harman, “Undermining,” 24.
37. Graham Harman, Toward Speculative Realism (New York: Zero Books, 2010), 133.
38. Levi Bryant, Democracy of Objects (Ann Arbor, Mich.: Open Humanities Press, 2011). Though I cannot fully discuss it here, staging Shaviro’s chapter, “The Actual Volcano: Whitehead, Harman, and the Problem of Relations,” in Bryant et al., The Speculative Turn, next to Eugene Thacker’s After Life (Chicago: University of Chicago Press, 2010) and its rereading of Neoplatonism and scholasticism would be profitable.
39. Bryant is at pains to separate this notion of a substance that subtends qualities from the account of matter we have seen in feminist new materialism. His version, however, is also quite different than Harman’s. For Bryant substance is “an absolutely individual system or organization of powers” (Democracy, 89) internal to the object. Referring to his blue mug, Bryant argues that this view of substance makes an object’s qualities the effect or event of the object’s own endo-relationships. For him “the mug blues” is a more accurate description of events. Though I don’t have the space to elaborate on this point here, Bryant’s notion of virtual proper being is far more relational than Harman’s.
40. Jane Bennett, Vibrant Matter: A Political Ecology of Things (Durham, N.C.: Duke University Press, 2010); hereafter cited as VM in parenthetical citations within the text. I call on Bennett because her work, like Barad’s, has been cited as “forerunner” to object-oriented ontology. This assimilation, I am arguing, reveals something about the persistence of things in feminist new materialism.
41. The Latour Litanizer uses Wikipedia to generate random lists. For a fuller explanation, see Bogost, Alien Phenomenology, esp. 95–96. The Litanizer is accessible from his website, www.bogost.com.
42. Hasana Sharp, Spinoza and the Politics of Renaturalization (Chicago: University of Chicago Press, 2011), 36.
43. Thacker, After Life, 208. Also see After Life for an elaboration of these terms.
44. Michel Foucault, History of Sexuality, vol. 1, trans. Robert Hurley (New York: Vintage Books, 1990), 92.
45. On the dispositif, see Michel Foucault, “The Confessions of the Flesh,” in Power/Knowledge: Selected Interviews and Other Writings, 1972–1977, ed. Colin Gordon (New York: Vintage, 1980), especially pages 194–225. On molar versus molecular entities see Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), esp. 31–36.
46. Foucault, History of Sexuality 1, 92.
47. Manuel De Landa, “Nonorganic Life,” in Incorporations, ed. Jonathan Crary and Sanford Kwinter (New York: Zone Books, 1992), 138.
48. Gilles Deleuze, Foucault, trans. Seán Hand (London: Continuum Books, 1999), 60.
49. De Landa, “Nonorganic,” 138.
50. For a related account of the chora and feminist theory, see Emanuela Bianchi, “The Interruptive Feminine: Aleatory Time and Feminist Politics,” in Undutiful Daughters: New Directions in Feminist Thought and Practice, ed. Henriette Gunkle, Chrysanthi Nigianni, and Fanny Soderback (New York: Palgrave Macmillan, 2012).
51. Judith Butler, Bodies That Matter: On the Discursive Limits of “Sex” (New York: Routledge, 1993), 42.
52. Harman, Toward Speculative Realism, 133.
53. Butler, Bodies That Matter, 48.
54. Deleuze and Guattari, A Thousand Plateaus, 43.
55. Pheng Cheah, “Non-Dialectical Materialisms,” in New Materialism: Ontology, Agency, and Politics, ed. Diana Coole and Samantha Frost (Durham, N.C.: Duke University Press, 2010), 86.
56. Brian Massumi, Semblance and Event: Activist Philosophy and the Occurrent Arts (Cambridge, Mass.: MIT Press, 2011), 122.
57. Ibid., 1.
58. Bogost, Alien Phenomenology, 133.
59. N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago: University of Chicago Press, 2012).
60. “About” page, SenseLab, www.senselab.ca/wp2/about/.
61. For distance and machine reading, see Franco Moretti’s Literary Lab at Stanford University. For a compelling alternative, see the University of Victoria’s Maker Lab and its Kits for Culture project.
62. It is worth noting that this parallel between philosophy and literary criticism suggests that the same episteme subtends them both, contrary to Harman’s assertion at the start of the essay that “the various districts of human knowledge have relative disciplinary autonomy” (WWBH, 183).
63. For a far more interesting version of speculative reading, see Eileen A. Joy, “Weird Reading,” Speculations: A Journal of Speculative Realism 4 (2013): 28–34.
64. Elizabeth Grosz, Chaos, Territory, Art: Deleuze and the Framing of the Earth (New York: Columbia University Press, 2008), 61.
Systems and Things: On Vital Materialism and Object-Oriented Philosophy
Jane Bennett
The recent turn toward nonhumans in the humanities and social sciences takes place within a complex swarm of other intellectual, affective, scientific, and political-economic trends.1 I think that two such trends are especially relevant. The first is a growing awareness of the accelerating concentration of wealth within neoliberal economies, as expressed by the Occupy movement and by the renewed vitality of Marx-inspired political analyses.2 Marxisms speak powerfully to the desire for a radical, forceful counterresponse to the injustices of global capitalism. But “historical materialisms” are not perceived as offering an equally satisfying response to a second set of trends, those described roughly as ecological: the growing awareness of climate change and the possibility that the Earth may have entered the geological epoch of the Anthropocene.3 Here, various vital materialisms arise to supplement and complement historical materialisms. They are inspired by twentieth-century feminisms of the body, as in the work of Simone de Beauvoir, Luce Irigaray, and Judith Butler; the phenomenology of Iris Young; the Spinozism of Moira Gatens; and the creative Darwin of Elizabeth Grosz. They also draw sustenance from a longer tradition of philosophical materialism in the West, where fleshy, vegetal, mineral materials are encountered not as passive stuff awaiting animation by human or divine power, but as lively forces at work around and within us. Ancient atomism is influential here, especially Lucretius’s clinamen or swerving primordia. This image will go on to inspire other theorists, less “atomistic” than Lucretius, to defend other versions
of a sensate, ever-evolving universe. The Romantic poet Erasmus Darwin, for example, writes in his 1803 Temple of Nature that “the wrecks of Death are but a change of forms; / Emerging matter from the grave returns, / Feels new desires, with new sensation burns.”4 And in the twentieth century, Michel Serres will spin Lucretian physics into an ontology of noise or the “percolating” vibrancy within all (only apparently stable) things.5 Other notables in this tradition include, as already noted, Baruch Spinoza, for whom every body (person, fly, stone) comes with a conatus or impetus to seek alliances that enhance its vitality; Henry Thoreau, the American naturalist who detects the presence of an effusive, unruly Wildness inside rocks, plants, animals, and locomotives; and Walt Whitman, who “aches with love” for matter because, after all, “Does not all matter, / aching, attract all matter?”6 These and other evocations of lively materiality downplay the differences between organic and inorganic, and they persist despite Immanuel Kant’s 1790 pronouncement that the very idea of lively matter “involves a contradiction, since the essential character of matter is lifelessness, inertia.”7 The nonhuman turn, then, can be understood as a continuation of earlier attempts to depict a world populated not by active subjects and passive objects but by lively and essentially interactive materials, by bodies human and nonhuman. Some of the impetus to reinhabit the tradition also comes from the voluminous mountains of “things” that today surround those of us living in corporate-capitalist, neoliberal, shopping-as-religion cultures. Novelty items, prepackaged edibles, disposable objects, past and future landfill residents, buildings, weeds, books, devices, websites, and so on, and so on—all these materialities make “calls” upon us, demand attention. It’s getting harder not to notice their powers of enabling and refusing us, of enhancing and destroying what we want (to have, to do, to be and become). Theorists of the nonhuman want to see what would happen—to perception and judgment, to sympathies and antipathies, to physical and intellectual postures, to writing styles and research designs, to practices of consumption and production, and to our very notions of self and the human—if what Graham Harman has termed the “allure” of objects were to have more pride of place in our thinking. It no longer seems
satisfactory to write off this allure as wholly a function of the pathetic fallacy or the projection of voice onto some inanimate stuff.8 Perhaps the big project of the nonhuman turn is to find new techniques, in speech and art and mood, to disclose the participation of nonhumans in “our” world. This would require the invention and deployment of a grammar that was less organized around subjects and objects and more capable of avowing the presence of what Bruno Latour called “actants.” It would also require closer attention to the work of those in physical and natural sciences who also recognize the power of materials to shape, induce, even hail other bodies.9 The focus of this chapter, however, is less broad. Its goal is to explore some of the internal philosophical differences within the nonhuman turn, in particular those between object-oriented philosophy and the various vital materialisms currently emerging.10
Withdrawal and Vitality

One such difference stems from a tendency for the former to follow Heidegger on the question of “things” and the latter to follow Deleuze and Guattari’s notion of a quite active and expressive “matter-movement” or “matter-energy” that “enters assemblages and leaves them.”11 Heidegger considered the uncanny agency of things in several of his late essays, where the incalculability of the Thing and its persistent withdrawal are emphasized, whereas Deleuze and Guattari highlighted the positive or productive power of things to draw other bodies near and conjoin powers. A key inspiration here is the “Nomadology” section of A Thousand Plateaus, in which Deleuze and Guattari mark the way metal does not just resist human endeavors but has “traits of expression” of its own that push and pull upon the endeavoring body of the metallurgist. “In short, what metal and metallurgy bring to light is a life proper to matter, a vital state of matter as such, a material Vitalism”:

Let us return to the example of the saber, or rather of crucible steel. It implies . . . the melting of the iron at high temperature [and] . . . the successive decarbonations; [but] corresponding to these singularities are traits of
expression—not only the hardness, sharpness, and finish, but also the undulations or designs traced by the crystallization and resulting from the internal structure of the cast steel. . . . Each phylum has its own singularities and operations, its own qualities and traits, which determine the relation of desire to the technical element (the affects the saber “has” are not the same as those of the sword). . . . At the limit, there is . . . a single machinic phylum, ideally continuous: the flow of matter-movement, the flow of matter in continuous variation, conveying singularities and traits of expression. . . .12

If for Heidegger things expose the limits of human knowing, for Deleuze and Guattari people, places, and things forge heterogeneous connections and form something like a compound, extended mind: “Metallurgy is the consciousness or thought of the matter-flow, and metal the correlate of this consciousness.”13 As the Deleuzean Bernd Herzogenrath puts it, matter is “equipped with the capacity for self-organization—matter is thus alive, informed rather than informe (‘formless’): ‘matter . . . is not dead, brute, homogeneous matter, but a matter-movement bearing singularities . . . , qualities and even operations.’ ”14 Object-oriented ontologists or speculative realists, and I will be using Graham Harman and Tim Morton as my examples, are instead attracted to Heidegger’s focus on the object’s negative power, its persistent withdrawal from any attempt to engage, use, or know it. Indeed, “objects” could not hope for more staunch defenders than Harman and Morton, who include in the category of objects pretty much everything: human individuals, literary texts, alcohol, spoons, plants. An object is, says Morton, a “weird entity withdrawn from access, yet somehow manifest.”15 Withdrawn and manifest. Withdrawn: even as a rat or a plastic bottle cap is producing an arresting effect on me and captures my attention, the speculative realist (who eschews the label “materialist”) insists that none of the bodies at the scene were wholly present to each other. Objects exist, says Harman, as “entities . . . quite apart from any relations with or effects upon other entities in the world.”16 Manifest: despite this apartness, objects are coy, always leaving hints of
a secret otherworld, “alluding” to an “inscrutable” reality “behind the accessible theoretical, practical, or perceptual qualities.” Objects are expert players of the game of hide-and-seek. It is thus important not to overstate the contrast between Harman and Morton on the one hand and a vital materialist position on the other. The status of the object’s Heideggerian withdrawal is not quite that of a human postulate: insofar as it is something that we sense, it is something that comes from the outside. The thing’s act of seeking cover is, says Heidegger, a “draft” from the “Open,” a beckoning call of sorts.17 In what follows, I consider the way such a figure of the object and its powers is positioned by Morton and Harman as a repudiation of “holism.” They include in that category assemblage-theories of various sorts, in which circulate bits and pieces of Deleuze, Latour, Manning, De Landa, Massumi, Haraway, Shaviro, Whitehead, Spinoza, Foucault, Romantic poets. I also try to make explicit just what turns—politically, ethically—on the object-philosophers’ strong claims about the apartness of objects. What difference would it make if I came to experience myself more explicitly as one essentially elusive object among others? What is at stake for political and ethical life (in North America?) in the fight against systems- or process-theories, especially since all the parties share a critique of linguistic and social constructivism? Since, that is, all parties see the nonhuman turn as a response to an overconfidence about human power that was embedded in the postmodernism of the 1980s and 1990s?
Relationality

At the heart of object-oriented ontology, says Harman, is a “deeply non-relational conception of the reality of things.”18 But why such a “deep” animosity toward relational ontologies? One minor motive may be the pleasure of iconoclasm: for Harman and Morton “networks, negotiations, relations, interactions, and dynamic fluctuations” are golden calves—and they enjoy smashing idols. (Who doesn’t?) System-oriented theory has, they say, already had its day and no longer yields philosophically interesting problems: the “programmatic movement toward holistic interaction is an idea
once but no longer liberating,” and “the real discoveries now lie on the other side of the yard.”19 But the stakes are higher, too. In “Aesthetics as First Philosophy,” Harman implies that there are ethico-political as well as philosophical liabilities to a relational or network or open-system or umwelt approach: a “vision of holistic interactions in a reciprocal web . . . this blurring of boundaries between one thing and another, has held the moral high ground in philosophy for too long. . . . The political reflexes associated with terms such as essence (‘bad’) and reciprocal interplay (‘good’) must be recalibrated. . . .”20 Elsewhere, Harman goes so far as to call it a “prejudice” to approach the world in terms of “complex feedback networks rather than integers.”21 But what is the alternative to “prejudice” here? It could be something like reasoned judgment, in which case the claim would be that object-oriented philosophy is more rationally defensible, a newbie perspective less encrusted with unthinking habit, mainstream culture, or normal subjectivity than relation-oriented theory. Or it may be that Harman would acknowledge that object-oriented philosophy itself includes a prejudice in favor of (a theoretical privileging of or conceptual honing in on) the mysterious object. But what this would then call for is an explicit account of the virtues or stakes of favoring mysterious objects over complex systems of relations, virtual and actual. But maybe there is no need to choose between objects and their relations. Since everyday, earthly experience routinely identifies some effects as coming from individual objects and some from larger systems (or, better put, from individuations within material configurations and from the complex assemblages in which they participate), why not aim for a theory that toggles between both kinds or magnitudes of “unit”? One would then understand “objects” to be those swirls of matter, energy, and incipience that can hold themselves together long enough to vie with the strivings of other objects, including the indeterminate momentum of the throbbing whole. The project, then, would be to make both “objects” and “relations” the periodic focus of theoretical attention, even if it remained impossible to articulate fully the “vague” or “vagabond” essence of “thing” or any “system.”22 Even if, that is, one could not give equal attention to both at once.
This is just what those passé philosophers Deleuze and Guattari do in A Thousand Plateaus. One of their figures for (what I am calling) the system dimension is “assemblage,” another is “plane of consistency.” The latter is characterized by Deleuze and Guattari as “in no way an undifferentiated aggregate of unformed matter.”23 (Neither assemblage nor plane of consistency qualifies as what Harman describes as a “relational wildfire in which all individual elements are consumed.”24) My point, in short, is that despite their robust attempts to conceptualize groupings, Deleuze and Guattari also manage to attend carefully to many specific entities—to horses, shoes, orchids, packs of wolves, wasps, priests, metals, and so on. Indeed, I find nothing in their approach inconsistent with the object-oriented philosopher’s claim that things harbor a differential between their inside and outside or an irreducible moment of (withdrawn-from-view) interiority. The example of A Thousand Plateaus also highlights the obvious point that not all theories of relationality, even if monistic, are holistic on the model of a smoothly functioning organism. There are harmonious holisms but also fractious models of systematicity, which allow for heterogeneity within and even emergent novelty, onto-pictures that are formally monistic but substantively plural.25 The whole can be imaged as a self-diversifying process of territorializations and deterritorializations (Deleuze and Guattari) or as creative process (Bergson, Whitehead), or as some combination thereof (the various new materialisms).26 Or take the model of relationality that William Connolly, following William James, calls “protean connectionism”: in contrast to both methodological individualism and organic holism, connectionism figures relations as “typically loose, incomplete, and themselves susceptible to potential change. . . . The connections are punctuated by ‘litter’ circulating in, between, and around them. Viewed temporally . . . connectionism presents a world in the making in an evolving universe that is open to an uncertain degree.”27 It makes sense to try to do justice both to systems and things—to acknowledge the stubborn reality of individuation and the essentially distributive quality of their affectivity. Harman rejects the very framing of the issue as things-operating-in-systems in favor of an object-oriented picture in which aloof objects are positioned as the sole locus of activity. On
occasion, however, even Harman finds himself theorizing a kind of relation—he calls it “communication”—between objects. He does try to insulate this object-to-object encounter from depictions that also locate activity in the relationships themselves or at the systemic level of operation, but I do not think that this parsing attempt succeeds. Neither do I quite see why it is worth the trouble, though it does bespeak the purity of Harman’s commitment to the aloof object: “The real problem is not how beings interact in a system: instead, the problem is how they withdraw from that system as independent realities while somehow communicating through the proximity, the touching without touching, that has been termed allusion or allure. . . .”28 I concur that some dimensions of bodies are withdrawn from presence but see this as partly due to the role they play in this or that relatively open system. In the text just quoted, Harman goes on to defend the view that communication via proximity is not limited to that between human bodies. The materialist would agree. Morton makes a similar, anti-anthropocentric point when he says that “what spoons do when they scoop up soup is not very different from what I do when I talk about spoons. . . . Not because the spoon is alive or intelligent (panpsychism), but because intelligence and being alive are aesthetic appearances—for some other phenomenon, including the object in question.”29 By engaging in what Bruno Latour might call a “horizontalizing” of the ontological plane, Morton and Harman allow their ecological sympathies to come to the fore, sympathies that, in Harman’s case, might not be so apparent given his philosopher’s insistence that objects of thought are objects, too. In the following quotation, however, Harman concerns himself with ordinary (non-ideational) objects:

If it is true that other humans signal to me without being fully present, and equally true that I never exhaust the depths of non-sentient beings such as apples and sandpaper, this is not some special pathos of human finitude. . . . When avalanches slam into abandoned cars, or snowflakes rustle the needles of the quivering pine, even these objects cannot touch the full reality of one another. Yet they affect one another nonetheless.30
Hyperobjects

Morton also agrees that process and assemblage are undesirable conceptualizations (“Objects are [ontologically] prior to relations”), and he shares the judgment that attempts to juggle both system and thing are ultimately “reductionist.”31 The reduction consists in the fact that for Deleuze et al., “some things are more real than others: flowing liquids become templates for everything else” and thus there is a failure to “explain the givenness of the ontic phenomenon.”32 Morton here helpfully points out the way in which ontologies of becoming tend to be biased toward the specific rhythms and scales of the human body. He also exposes the human body–centric nature of the figure of a “flowing liquids” ontology: “I marvel at the way . . . syrup lugubriously slimes its way out of a bottle. . . . But to a hypothetical four-dimensional sentient being, such an event would be an unremarkable static object, while to a neutrino the slow gobs of syrup are of no consequence whatsoever. There is no reason to elevate the lava lamp fluidity . . . into the archetypical thing.”33 Perhaps there is no reason to do so—if, that is, we are in fact capable of transcending the provincial pro-human-conatus perspective from which we apprehend the world. If we are not, then a good tack might be to stretch and strain those modes to make room for the outlooks, rhythms, and trajectories of a greater number of actants, to, that is, get a better sense of the “operating system” upon which we humans rely. Morton also offers a pragmatic, political rationale for his devotion to the coy object: no model of the whole (flowing or otherwise) can today help us cope with what he elsewhere calls “hyperobjects.”34 And this is the part of his position that raises the strongest objection to even a fractious-assemblage model. “Hyperobjects are phenomena such as radioactive materials and global warming.” They are “mind-blowing” entities, because their ahuman time scales and the extremely large or vastly diffused quality of their occupation of space unravel the very notion of “entity.” It also becomes hard to see how it is possible to think hyperobjects by placing them within a larger “whole” within which we humans are a meaningful part, because hyperobjects render us kind of moot. For Morton, “this means that we need some other
basis for making decisions about a future to which we have no real sense of connection.” Evidence of the unthinkability of the hyperobject “climate change” is the fact that conversations about it often devolve into the more conceptualizable and manageable topic of weather. Weather, even with its large theater of operation, remains susceptible to probabilistic analysis, and it can still be associated with the idea of a (highly complex) natural order. Weather, in short, is still an “object.” But with “climate change,” it’s much harder, impossible, really, says Morton, to sustain a sense of the existence of “a neutral background against which human events can become meaningful. . . . Climate change represents the possibility that the cycles and repetitions we come to depend on for our sense of stability and place in the world may be the harbingers of cataclysmic change.”35
Modesty

In recent essays, Morton and Harman focus their objection to relationism around the claim that “ ‘everything is connected’ is one of those methods that has long since entered its decadence, and must be abandoned.”36 Here again we see that one of the reasons for their rejection of “relationism” is that it distracts attention from the “non-connections” between objects (their withdrawn nature). But what, precisely, are the ill effects they fear? Harman and Morton don’t say it outright, but it seems clear that one of their targets is human hubris. Their claim about the withdrawal of the object functions as a litany, a rhetorical tic that suggests something about the ethical impetus behind their position: object-orientedness is (what Foucault would call) a technique of self that seeks to counter the conceit of human reason and to chastise (what Nietzsche called) the “will to truth.” The desire to cultivate theoretical modesty is indeed noble. But object-oriented philosophy has no monopoly on the means to this end. Contemporary materialisms (inspired by Deleuze, Thoreau, Spinoza, Latour, neuroscience, or other sources) that affirm a vitality or creative power of bodies and forces at all ranges or scales also cut against the hubris of human exceptionalism. Morton’s wholesale rejection of materiality as a term of art is perplexing; he seems
to recognize no version except that associated with matter as a flat, fixed, or law-like substrate. What is more, does not a focus on the sensuous stuff of bodies save relationism from the “hologram” version that Harman rightly criticizes?37 I find myself living in a world populated by materially diverse, lively bodies. In this materialism, things—what is special about them given their sensuous specificity, their particular material configuration, and their distinctive, idiosyncratic history—matter a lot. But so do the eccentric assemblages that they form. Earthy bodies, of various but always finite durations, affect and are affected by one another. And they form noisy systems or temporary working assemblages that are, as much as any individuated thing, loci of effectivity and allure. These (sometimes stubborn and voracious but never closed or sovereign) systems enact real change. They give rise to new configurations, individuations, patterns of affection. Networks of things display differential degrees of creativity, for good or for ill from the human point of view.38 There may be creative evolution at the system level, if Bergson and contemporary complexity theorists are on the right track.39 I say this because Harman argues that a philosophy such as mine, which connects hiding-and-seeking objects to assemblages, can have no account of change. This is because, the argument continues, there must be an unactualized surplus for something to happen differently. But systems as well as things can house an underdetermined surplus, and assemblage-theories can offer an account of the emergence of novelty without also rendering the trajectory, impetus, drive, or energetic push of any existing body epiphenomenal to its relations.
Objects / Things / Bodies

Harman says that the distinction between “objects” and “things” is irrelevant for his purposes, perhaps because he does not want to restrict himself unduly to the (weird) physicality of objects or to the power that they exhibit in (relatively) direct, bodily encounters with us. I am more focused on this “naturalist” or Romantic realm, and here I find the term thing or body better as a marker of individuation, better at highlighting the way certain edges within
an assemblage tend to stand out to certain classes of bodies. (The smell and movement of the mammal to the tick, to invoke Jakob von Uexküll’s famous example.40) “Thing” or “body” has advantages over “object,” I think, if one’s task is to disrupt the political parsing that yields only active (manly, American) subjects and passive objects. Why try to disrupt this parsing? Because we are daily confronted with evidence of nonhuman vitalities actively at work around and within us. I also do so because the frame of subjects-and-objects is unfriendly to the intensified ecological awareness that we need if we are to respond intelligently to signs of the breakdown of the Earth’s carrying capacity for human life.
Texts as Special Bodies

I close by turning briefly to things that are literary: the essay and the poem. Like all bodies, these literary objects are affected by other bodies, or, as Morton puts it, “A poem is not simply a representation, but rather a nonhuman agent.”41 I also proclaim that the effectivity of a text-body, including its ability to gesture toward a something more, is a function of a distributive network of bodies: words on the page, words in the reader’s imagination, sounds of words, sounds and smells in the reading room, and so on, and so on—all these bodies co-acting are what do the job. There are also, it seems, some features of the text-body that are not shared or shared differentially by bodies that rely more heavily on smell and touch, and less heavily on the conveyances that are words. I’m not qualified to say too much about the affectivity of a text as a material body, and I can only gesture in the direction that Walt Whitman takes when he says that poetry, if enmeshed in a fortuitous assemblage of other (especially nontext) bodies, can have material effects as real as any. If you read Leaves of Grass in conjunction with “the open air every season of every year of your life,” and also bound in affection to “the earth and sun and the animals,” while also going “freely with powerful uneducated persons and with the young and with the mothers of families,” then “your very flesh shall be a great poem and have the richest fluency not only in its words but in the silent lines of its lips and face and
between the lashes of your eyes and in every motion and joint of your body.”42 Texts are bodies that can light up, by rendering human perception more acute, those bodies whose favored vehicle of affectivity is less wordy: plants, animals, blades of grass, household objects, trash. Another example is this passage from Finnegans Wake, where Joyce describes Shem the Hoarder’s living space: The warped flooring of the lair and soundconducting walls thereof, to say nothing of the uprights and imposts, were persianly literatured with burst loveletters, telltale stories, stickyback snaps, doubtful eggshells, bouchers, flints, borers, puffers, amygdaloid almonds, rindless raisins, alphybettyformed verbage, vivlical viasses, ompiter dictas, visus umbique, ahems and ahahs, imeffible tries at speech unasyllabled, you owe mes, eyoldhyms, fluefoul smut, fallen lucifers, vestas which had served, showered ornaments, borrowed brogues, reversible jackets, blackeye lenses, family jars, falsehair shirts, Godforsaken scapulars, neverworn breeches, cutthroat ties, counterfeit franks, best intentions, curried notes, upset latten tintacks, unused mill and stumpling stones, twisted quills, painful digests, magnifying wineglasses, solid objects cast at goblins, once current puns, quashed quotatoes, messes of mottage.43 Perhaps the most important stake for me of the nonhuman turn is how it might help us live more sustainably, with less violence toward a variety of bodies. Poetry can help us feel more of the liveliness hidden in such things and reveal more of the threads of connection binding our fate to theirs.
Notes

1. To cite just some examples: Michelle Bastian, “Inventing Nature: Re-writing Time and Agency in a More-Than-Human-World,” Australian Humanities Review 47 (2010): 99–116; Nicky Gregson, H. Watkins, and M. Calestant, “Inextinguishable Fibres: Demolition and the
Vital Materialisms of Asbestos,” Environment and Planning A 42, no. 5 (2010): 1065–83; Steven Shaviro, “The Universe of Things,” www.shaviro.com/Othertexts/Things.pdf; Graham Harman, “The Assemblage Theory of Society,” in Towards Speculative Realism (Winchester, U.K.: Zero Books, 2010); Aaron Goodfellow, “Pharmaceutical Intimacy: Sex, Death, and Methamphetamine,” Home Cultures 5, no. 3 (2008): 271–300; Anand Pandian, “Landscapes of Expression: Affective Encounters in South Indian Cinema,” Cinema Journal 51, no. 1 (Fall 2011): 50–74; Eileen A. Joy and Craig Dionne, eds., “When Did We Become Post/Human?,” special issue, Postmedieval: A Journal of Medieval Cultural Studies 1, no. 1/2 (Spring/Summer 2010); Jussi Parikka, Insect Media: An Archaeology of Animals and Technology (Minneapolis: University of Minnesota Press, 2010); Bruce Braun and Sarah Whatmore, “The Stuff of Politics: An Introduction,” in Political Matter: Technoscience, Democracy, and Public Life (Minneapolis: University of Minnesota Press, 2010); Stefanie Fishel, “New Metaphors for Global Living,” PhD diss., Johns Hopkins University, 2011; and Jane Bennett, Vibrant Matter (Durham, N.C.: Duke University Press, 2010). 2. Jodi Dean articulates clearly this desire: “Instead of a politics thought primarily in terms of resistance, playful and momentary aesthetic disruptions, the immediate specificity of local projects, and struggles for hegemony within a capitalist, parliamentary, setting, the communist horizon impresses on us the necessity of the abolition of capitalism and the creation of global practices and institutions of egalitarian cooperation.” Jodi Dean, “The Communist Horizon,” Kasama Project, August 30, 2012, http://kasamaproject.org/. See also Jodi Dean, The Communist Horizon (London: Verso, 2012). 3. Ian Baucom’s 2013 seminar for the School for Criticism and Theory at Cornell University describes the issue thus: “Human history, human culture, human society have now come to possess a truly geological force, a capacity not only to shape the local environments of forests, river-systems, and desert terrain, but to effect, catastrophically, the core future of the planet as we enter into the long era of what the atmospheric chemist Paul Crutzen and other climate researchers have called the ‘Anthropocene.’ ” http://english.duke.edu/uploads/media_items/spring-2013-course-descriptions-graduate-only-rev5.original.pdf. 4. Erasmus Darwin, Temple of Nature; or, The Origin of Society: A Poem, with Philosophical Notes (London: J. Johnson, 1803), canto 4, lines 398–400; electronic ed., ed. Martin Priestman, www.rc.umd.edu/editions/darwin_temple/toc.html. 5. See Michel Serres, Genesis, trans. Geneviève James and James
Nielson (Ann Arbor: University of Michigan Press, 1995), and The Birth of Physics, trans. Jack Hawkes (Manchester, U.K.: Clinamen Press, 2000). 6. Walt Whitman, “I Am He Who Aches with Love,” book 4, Children of Adam, Leaves of Grass. 7. “The concept of it [lively matter] involves a contradiction, since the essential character of matter is lifelessness, inertia” (emphasis in original). Immanuel Kant, Critique of Judgment (1790), sec. 73, #394. Given Kant’s own flirtation with Johann Blumenbach’s notion of Bildungstrieb, I can’t help but hear the definitiveness of his claim here as an attempt to ward off a vitalism gestating inside his own thinking. 8. I think that the notions of “pathetic fallacy” and “prosopopoeia,” even if stretched creatively, are not right for my project. Satoshi Nishimura defines the former as the “ascription of human characteristic to inanimate objects, which takes place when reason comes under the influence of intense emotion.” Satoshi Nishimura, “Thomas Hardy and the Language of the Inanimate,” Studies in English Literature: 1500–1900 43, no. 4 (Autumn 2003): 897. The notion of a pathetic fallacy, like that of prosopopoeia, assumes and insinuates that only humans (or God) can indeed engage in transmissions across bodies. The pathetic fallacy and prosopopoeia remain closely aligned with Kant’s categorical distinction between life and matter. 9. My Vibrant Matter was in part a response to a call from some lively matter: some items of trash on the street on a sunny day called me over to them and for a few hyper-real moments I saw from the inside out (so to speak) how I too was an element in an assemblage that included these other things. My attempt to give an account of this encounter confounded and was confounded by the grammar of subjects and objects. 10. These latter are sometimes called “new” materialism, but that is unfortunate, for the label both imposes an impossible burden of creation (ex nihilo!) on the “new” materialisms and disrespects the ongoing vitality and political importance of the “old,” historical materialisms. 11. Gilles Deleuze and Félix Guattari, A Thousand Plateaus, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1986; Continuum Books, 2004), 449. Citations refer to the Continuum edition. 12. Ibid., 448. 13. Ibid., 454. 14. Bernd Herzogenrath, “Nature/Geophilosophy/Machinics/Ecosophy,” in Deleuze/Guattari & Ecology, ed. Bernd Herzogenrath (New York: Palgrave, 2009), 6. 15. Timothy Morton, “An Object-Oriented Defense of Poetry,” New Literary History 43, no. 2 (Spring 2012): 208.
16. Graham Harman, “The Well-Wrought Broken Hammer,” New Literary History 43, no. 2 (Spring 2012): 187. 17. See Martin Heidegger, “The Age of the World Picture,” in The Question Concerning Technology, and Other Essays, trans. William Lovitt (New York: Harper, 1982). “Everyday opinion sees in the shadow only the lack of light, if not light’s complete denial. In truth, however, the shadow is a manifest, though impenetrable, testimony to the concealed emitting of light. In keeping with this concept of shadow, we experience the incalculable as that which, withdrawn from representation, is nevertheless manifest in whatever is, pointing to Being, which remains concealed” (appendix 13, p. 154). Related is Graham Harman’s notion of the “allure” of the object’s mysterious withdrawal from the realm of our knowing. Graham Harman, Guerrilla Metaphysics: Phenomenology and the Carpentry of Things (Chicago: Open Court Press, 2005). 18. Harman, “Well-Wrought Broken Hammer,” 187. 19. Ibid., 187–88. 20. Graham Harman, “Aesthetics as First Philosophy: Levinas and the Non-Human,” Naked Punch 9 (Summer/Fall 2007): 21–30. 21. Harman, “Well-Wrought Broken Hammer,” 188. 22. The terms are those of Deleuze and Guattari in A Thousand Plateaus: “It seems to us that Husserl brought thought to the decisive step forward when he discovered a region of vague and material essences (in other words, essences that are vagabond, inexact and yet rigorous), distinguishing them from fixed, metric and formal essence. . . . They constitute fuzzy aggregates. They relate to a corporeality (materiality) that is not to be confused either with an intelligible, formal essentiality or a sensible, formed and perceived thinghood” (emphases in original, 449–50). 23. Deleuze and Guattari, A Thousand Plateaus, 78. 24. Harman, “Well-Wrought Broken Hammer,” 191. 25. My thanks to Alex Livingston for this formulation. 26. As Katrin Pahl shows in Tropes of Transport: Hegel and Emotions (Evanston, Ill.: Northwestern University Press, 2012), Hegel too offers a holism or relationism at odds with the organic model. 27. William Connolly, A World of Becoming (Durham, N.C.: Duke University Press, 2011), 35. 28. Harman, “Aesthetics as First Philosophy,” 25. 29. Morton, “An Object-Oriented Defense,” 215. 30. Harman, “Aesthetics as First Philosophy,” 30. 31. Morton, “An Object-Oriented Defense,” 217. 32. Ibid., 208. 33. Ibid.
34. And in defining the stakes ecologically, I reveal the presence in me of a bias toward human bodies (even as I share Harman’s and Morton’s desire to become more alert to the power, beauty, and danger of the nonhumans around and within a human body). 35. Timothy Morton, “Hyperobjects and the End of Common Sense,” The Contemporary Condition blog, March 10, 2010, http://contemporarycondition.blogspot.com/. 36. Harman, “The Well-Wrought Broken Hammer,” 201. 37. Harman is right, I think, to note that “both philosophical and political problems arise when individual selves and texts are described as holograms, as the relational effects of hostile others and disciplinary power” (“The Well-Wrought Broken Hammer,” 193). But I do not think that there are many theorists in the humanities who still today would endorse such a strong version of constructivism. 38. A political example of this creative power is William Connolly’s account of how the “evangelical-capitalist resonance machine” induced from out of the (human and nonhuman) bodies of American culture a new set of actants: Christian-fundamentalist-free marketeers. William Connolly, “The Evangelical-Capitalist Resonance Machine,” Political Theory 33, no. 6 (December 2005): 869–86. 39. See, for example, Stuart A. Kauffman, Reinventing the Sacred: A New View of Science, Reason and Religion (New York: Basic Books, 2008), and Terence Deacon, Incomplete Nature: How Mind Emerged from Matter (New York: Norton, 2012). 40. Jakob von Uexküll, A Foray into the Worlds of Animals and Humans: with, A Theory of Meaning, trans. Joseph D. O’Neil (Minneapolis: University of Minnesota Press, 2011; originally published by Verlag von Julius Springer, 1934). 41. Morton, “An Object-Oriented Defense,” 215. 42. Walt Whitman, “Preface 1855,” in Leaves of Grass and Other Writings, ed. Michael Moon (New York: Norton, 2002), 622. 43. James Joyce, Finnegans Wake (Oxford: Oxford University Press, 2012), 183.
Acknowledgments

This book originated in a conference, The Nonhuman Turn in 21st Century Studies, which was hosted at the Center for 21st Century Studies (C21), University of Wisconsin–Milwaukee, on May 3–5, 2012. I first thank the conference organizing committee: Mary Mullen, C21 deputy director; John Blum, C21 associate director; and Rebekah Sheldon, the 2011–12 C21 Provost Postdoctoral Fellow. Together the committee proved essential to the success of the conference in so many ways: the development of the initial call for papers; the assembly of a list of extraordinary plenary speakers; the selection of papers and the formation of panels for the conference’s breakout sessions; and the overall organization and administration of the conference. I also thank Mike Darnell and Stephanie Willingham, who worked as successive business managers for the Center, for their good work, as well as the two C21 graduate project assistants, Selene Jaouadi-Escalera and Kendrick Gardner, for their hard work in making the conference a success. Finally I thank Johannes Britz, provost and vice chancellor for Academic Affairs; Rodney Swain, dean of the College of Letters and Sciences; and Jennifer Watson, associate dean of the College of Letters and Science, for their support. The editing and production of this book could not have happened without the exceptional work of John Blum, who serves as the Center’s editor. John is a careful and tireless manuscript editor who brings a wealth of experience to the task of manuscript preparation. I also thank Doug Armato, director of University of Minnesota Press, for his role in bringing the Center’s book series to Minnesota. The Nonhuman Turn is the first publication in what promises to be an extended run. I thank Erin Warholm-Wohlenhaus and Alicia Gomez for helping bring the book into print. Finally, I express my thanks to current C21 deputy director Emily Clark for her unparalleled leadership and to the book’s contributors, without whom this book would not exist.
Contributors

Jane Bennett is professor of political science at Johns Hopkins University. She is editor of the journal Political Theory and author of Vibrant Matter: A Political Ecology of Things. Ian Bogost is the Ivan Allen College Distinguished Chair in media studies and professor of interactive computing at the Georgia Institute of Technology, where he also holds an appointment in the Scheller College of Business. He is founding partner at Persuasive Games LLC, an independent game studio, and contributing editor to the Atlantic. He is the author of Alien Phenomenology, or What It’s Like to Be a Thing (Minnesota, 2012) and How to Do Things with Videogames (Minnesota, 2011). Wendy Hui Kyong Chun is professor of modern culture and media at Brown University. She is the author of Programmed Visions: Software and Memory and Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Richard Grusin is director of the Center for 21st Century Studies and professor of English at the University of Wisconsin–Milwaukee. He is the author of Premediation: Affect and Mediality after 9/11; Culture, Technology, and the Creation of America’s National Parks; and (with Jay David Bolter) Remediation: Understanding New Media. Mark B. N. Hansen is professor of literature, and media arts and sciences, at Duke University. He is the author of Feed-Forward: On the Future of Twenty-First-Century Media; Bodies in Code: Interfaces with New Media; and New Philosophy for New Media.
Erin Manning holds a university research chair in relational art and philosophy at Concordia University (Montreal, Canada), where she is director of the SenseLab (www.senselab.ca). She is coauthor (with Brian Massumi) of Thought in the Act: Passages in the Ecology of Experience (Minnesota, 2014) and author of Always More Than One: Individuation’s Dance and Relationscapes: Movement, Art, Philosophy. Brian Massumi is professor of communication at the University of Montreal. He is coauthor (with Erin Manning) of Thought in the Act: Passages in the Ecology of Experience (Minnesota, 2014), author of Semblance and Event: Activist Philosophy and the Occurrent Arts, and translator of Gilles Deleuze and Félix Guattari’s A Thousand Plateaus. Timothy Morton is Rita Shea Guffey Chair of English at Rice University. He is the author of Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, 2013); Realist Magic: Objects, Ontology, Causality; The Ecological Thought; and Ecology without Nature. Steven Shaviro is the DeRoy Professor of English at Wayne State University. He is the author of The Universe of Things: On Speculative Realism (Minnesota, 2014); Post Cinematic Affect; Without Criteria: Kant, Whitehead, Deleuze, and Aesthetics; and Connected, or, What It Means to Live in the Network Society (Minnesota, 2003). Rebekah Sheldon is assistant professor of English at Indiana University. She has received postdoctoral fellowships in the School of Literature, Media, and Communication at Georgia Tech and at the Center for 21st Century Studies at the University of Wisconsin–Milwaukee.
Index

abstraction, 2, 14, 60, 62, 91; and computer programming, 157–58; language as, 150, 152, 155, 157; and Whitehead, 31, 46–47, 49, 73–74, 132 actor-network theory, viii, xv–xvi, 202. See also Latour, Bruno affect: of fear, 105; as human and nonhuman, xvii–xviii; scholarly, 216; turn to, xvii affect theory, viii, xv–xviii, 193; and trauma theory, xviii affectivity, xix, xxiv, 111, 112, 229; of a text, 234, 235; and Whitehead, xvi–xvii After Finitude (Meillassoux), 193–94, 200–201, 203 Agamben, Giorgio, 149, 154, 163n30, 199. See also state of exception agency: of epistemology, 196; of the future, 108, 113, 123; and Heidegger, 225; human, xx, xxv, 129, 139–40, 148–49, 154, 158; human and nonhuman, xv, xxvi, 214; on a network, 154; new media, 144; nonhuman, viii, xi–xii, xvi, xvii, 187–88, 195, 217; and performativity, 201–2; of power, 208; and the subject, 122; systemic, 212; technical, 129; and tendency, 118; of a video camera, 182
agent. See agency agential realism, 201–2 Agre, Philip, 157–58 Alaimo, Stacy, 202 Alien Phenomenology (Bogost), 25, 85, 88, 99, 183, 194 animality: and artfulness, 2, 10, 75; and humanity, xxii, 10, 13–14; and sympathy, 11, 14. See also animals; instinct; sympathy animals: and adaptation, 7; and affectivity, xvii; artfulness of, 75; experience of, 4, 7; as non human, vii, x, xix, xxi–xxii, 21, 24, 208; as thinking and feeling, 21, 29–30; for Thoreau, 224; for Whitman, 234–35. See also animality; instinct; sympathy Anthropocene, vii, 185–86, 188, 223, 236n3 anthropocentrism, 20, 24, 167, 195, 217; anti-, 230 anthropomorphism, 24–25, 151–52, 155, 181 anxiety, 169, 172, 175, 184–86, 189–90; and Anthropocene, 185; global, 168; low-level, 112 aperture, 170, 174, 186 Aristotle, 174, 196, 207 assemblage: material as, 233; mind as, 179; nonhuman turn as, x; nonhumans as, 179; objects as,
206, 228; society as, xviii; text as, xxvi; trans-individual, 204; undesirability of, 231; video technology as, 182–83. See also assemblage theory; Latour, Bruno assemblage theory, viii, xviii, 227; and Bennett, xxvii, 208, 233–34, 237n9; and Deleuze and Guattari, 225, 229, 231; and Harman, 229; opposition to, 231. See also assemblage; Latour, Bruno atomism, 183; in Erasmus Darwin, 224; in Lucretius, 22, 35–36, 223–24; in Serres, 224; in Spinoza, 224; in Thoreau, 224; in Whitehead, 22, 37, 74; in Whitman, 224 Barad, Karen, 32, 206, 209, 221n40; intra-actions, 196, 202; “Posthumanist Performativity,” 201–2 Barlow, John Perry, 141, 142 Basil, Toni, xxv, 167, 168, 173, 176, 179, 184 becoming: animal, xxii, 2, 11, 13–14; and art, 10, 45, 60, 62, 63; and constructivism, xi, xviii; and desire, 9, 11, 13; -event, 73–74; of the future, 123, 125; -imperceptible, 218; ontologies of, 231; -other, 23; Simondian ethics of, x; -society, 74; and sympathy, 11, 13, 65, 72; and time, 48, 60, 211; of the world, 116, 121, 125. See also under being being: as abstraction, 46; and appearance, 177, 183; and becoming, 48, 74, 196, 207,
211–12; and desire, 9; heart of, 49; and Heidegger, 175, 185, 238n17; and Levinas, 186; and mind, 19, 24, 40, 187, 193–94, 230; pain of, 169; problem of, 25; and time, 48, 51; tool-, 174, 203, 206; unchanging nature of, 207; and writing, 86. See also becoming Benjamin, Walter, xvi, 147 Bennett, Jane: and feminist new materialism, 196; strategic anthropomorphism, 181; Vibrant Matter, xxvi, 208–9 Bergson, Henri, xvii, 12, 229; Creative Evolution, 1, 8, 233; and instinct, 8, 10. See also intuition; sympathy Bing (search engine), 147–48 Bogost, Ian, 214. See also Alien Phenomenology; Latour Litanizer book-making, 91–93 Brandom, Robert, 21, 24 Brennan, John, 101–4, 106–11, 113, 123, 133n3, 134n17. See also preemption broken tool, 168, 170, 173, 177; of Eno, 176, 182, 187; of Heidegger, xxvi, 175–76; memory as, 181; of modernity, 167 Bryant, Levi, 23, 86, 207, 221n39. See also Speculative Turn, The Buckaroo Banzai, 172 Buddhism, 169 Bush, President George W.: administration of, xxiv–xxv, 104, 106–8, 111, 134n14, 160 Butler, Judith: and chora, 207, 212; Excitable Speech, 153, 155 Byrne, David, 167, 169, 176, 179, 184
Cheah, Pheng, 213 Checking In / Checking Out (Schaberg and Yakich), 92 choreographic object (Forsythe), 52, 53, 70 chroma-keying, 173, 175, 176–77, 179, 188 climate change, vii, 167, 223, 231–32; and George W. Bush administration, 160 code: and crisis, 140, 149, 158; legal, 139, 152–54; moral, 74, 139. See also computer code; software; source code Coleman, Sam, 33, 35–36 Coleridge, Samuel Taylor, 186, 187, 188 computer code: as end of democracy, 153–54; executable, 151–53, 156–57; as language, 151–55, 157–58; as logos, xxv, 140, 149–57, 161n3. See also code; software; source code Connolly, William, 229, 239n38 consciousness, xxii, 13–14; Chalmers’s example of, 29; for Deleuze and Guattari, 226; in human experience, 116; problem of, 36–37; Ruyer’s definition of, 2, 11–12; and Strawson, 31, 33; and Whitehead, 31, 35, 117–18, 128 constructivism. See social constructivism contemplation, 74–76 correlationism, xxvi, 25, 89, 185, 196; definition of, 193–94; feminist critique of, 200–204; and Kant, xii, 187, 194; object-oriented ontology critique of, 204–5; and panpsychism, 40. See also Harman, Graham; Meillassoux, Quentin
Cow Clicker (Bogost), xxiv, 90–91, 96–98 Crandall, Jordan, 118 “Crosseyed and Painless” (song), xxv, 167, 168, 170, 176, 179, 186; lyrics to, 171–72 Crosseyed and Painless (video), xxv–xxvi, 167, 168, 179–84, 186–89; as aperture, 170; glowing car in, 173–74; as homage to Eno, 176; images from, 169, 174, 190 Darwin, Charles, viii, 24; work inspired by, 223 Darwin, Erasmus: Temple of Nature, 224 data mining, xxv, 111, 115, 133, 134n17; for Gandy, 107–8 da Vinci, Leonardo, 198 de Chirico, Giorgio, 175 De Landa, Manuel, viii, 32, 210–11, 227 Delany, Samuel R., 198–99, 208 Deleuze, Gilles: as different from Whitehead, 41; Francis Bacon, 16n18; and intuition, 49–50; and virtuality, 213 Deleuze and Guattari: affectivity, xvii–xviii; animal/art, theory of, 2, 6, 8, 9–12, 15n8; assemblage, 225, 229, 231; blocks of sensation, 9, 11, 12, 59; body without organs, 212–13; desire, 9–10, 11, 13; matter-movement, 225, 226; plane of consistency, 13, 229; plane of immanence, xx; plane of organization, 12–13; turning toward, xx–xxi
democracy: end of, 153–54; and Internet, 141; of objects, 207, 216 Dennett, Daniel, 27, 31–32 Derrida, Jacques: and disappearance of the origin, 165n51; on end of democracy, 153–54; il n’y a pas de hors-texte, xiv; and logocentrism, 161n3; on modern technologies, 152; pharmakon, 136n25; on the police, 153–54; on sovereign violence, 155; on the undecidable, 149, 159, 160–61 Descartes, René, 21, 31; cogito, 21, 32 DNA, 147, 172, 181 Doane, Mary Ann, 144–45 drone killings, 102, 107, 111, 123 dunamis, 209–10, 211, 213, 216. See also plenum duration, 45–49, 60, 62, 68, 70, 148; and compression, 145; and inner experience, 39; spirituality of, 65–66. See also time Electric Boogaloos, 168, 176 eliminativism, 19, 20, 33; of Dennett, 27, 31–32; of Meillassoux and Brassier, 24. See also speculative realism Enlightenment, 141, 142, 205 Eno, Brian, 173, 176–77, 179, 181–82, 184, 187 epistemology, xi, 28, 216; as actant on materiality, 196; critique of, 195; feminist, 197, 199, 201, 204, 217; and Harman, 39; in literary and cultural theory, 193; and ontology, 25, 35, 38, 172, 196, 202; of uncertainty, 104, 106 Exploit, The (Thacker and Galloway), 91
facts: absolute, 173; in “Crosseyed and Painless” lyrics, 171–72; etymology of, 179; as illusionary, 184, 189–90; and irrelevance, 112; objectifiable, 27, 30; preconceived, 63; and smallism, 33; uncertain, 173; and values, 23; and Whitehead, 39; and Wittgenstein, 27 Faust, 151–52 feminism: and epistemology, 197, 199, 201, 204, 217; internal critique of, 201; and materialisms, 223; against objectification, xviii; and object-oriented ontology, 194–95, 204; and science, 195, 197, 201–3, 214, 217; and socialism, xv. See also correlationism: feminist critique of; new materialism: feminist Feuer, Jane, 144–45 Folds to Infinity (Manning), 52–53, 57–58, 66–67, 78n27; image from, 53 Forsythe, William, 52 Foucault, Michel: dispositif, 210; influence on Barad, 201–2; technique of self, 232 Freud, Sigmund, 170, 180, 181, 199 Frodeman, Robert, 87. See also philosophy: field funk, 170–71, 184, 189 Galloway, Alexander, xxviiin9; and code, 151, 152; Exploit, 91 Game Developers Conference (2010), 90, 97 Gandy, Oscar, Jr., 107–8 global warming. See climate change Godwin’s law, 142
Grosz, Elizabeth, 217–18 Grusin, Richard. See premediation; Premediation Guns, Germs, and Steel (Diamond), 95
social sciences, vii, ix–x, 223; twenty-first century, xx Husserl, Edmund, 179, 183, 238n22 hyperobjects, xxvii, 194, 231–32. See also objects
Haraway, Donna: and animal studies, viii; and assemblages, 224; “Manifesto for Cyborgs,” xv, xvi; nature-cultures, 196; paradise myth, 200; “The Promise of Monsters,” 197 Harman, Graham: allure of objects, 25, 224–25, 238n17; and broken tools, 177; carpentry of things, 88; critique of Ladyman and Ross, 36–37, 38; Ferris wheel example, 206–7, 208; human access, 22; object-oriented ontology, xxvi, 205–7, 212, 215–16, 226–30, 232–34; repudiation of matter, 207–8; Tool-Being, 203, 205–7; “Well-Wrought Broken Hammer,” 215–16, 218n1, 222n62, 239n37. See also object-oriented ontology; objects; Speculative Turn, The; withdrawn objects Hartouni, Valerie, 198, 200 Hayles, N. Katherine, 214 Heidegger, Martin: and animals, 21; and Being, 175, 185; Dasein, 183, 185; and phenomenology, 203; and things, 225–27, 238n17; and tools, xxvi, 175–76; vorhanden, 175; and withdrawn objects, 226–27; zuhanden, 175 Herzogenrath, Bernd, 226 Hird, Myra, 195, 202, 206 humanities, xiii–xiv, xvi; digital, 214; linguistic turn in, 203; realism in, 193; and
imminent threats (Bush administration), xxiv, xxv, 101–4, 108–9, 133n3, 134n17; Americans as, 115 inner experience, 22, 26, 29–30, 33, 39. See also mental states; mentality; Wittgenstein, Ludwig: inner sensations instinct, xxi, 1–2, 6–10, 13; Tinbergen’s experiments in, 2–4 interiority. See inner experience Internet: as “critical” machine, 139–44; nonhuman traffic on, xiii–xiv; as predictive, 114; as small world, 147 intuition, xxiii, 60; and art, 47–49, 63–64; and contemplation, 74–75; definition of, 45–46; and Deleuze, 49–50; and habit, 68–69. See also Bergson, Henri; sympathy Irigaray, Luce, 199, 200, 223 James, William: as inspiration to Connolly, 229; and panpsychism, 20; Principles of Psychology, viii, xvi; pure experience, 11; stream of consciousness, 31; terminus, 12 Jones, Judith, 122–23, 136n26 Joyce, James: Finnegans Wake, 235 Kant, Immanuel, 179; and aesthetics, 186–87; and correlationism, xii, 187, 194; and the Enlightenment, 141; on matter, 224,
237n7, 237n8; and the transcendental, 21, 183; and universal communicability, 21 Keenan, Thomas, 159 Kierkegaard, Søren, 159 Kittler, Friedrich, xxv, 155–56, 157 Kleinberg, J., 147 Lacan, Jacques, 172 Ladyman (James) and Ross (Don), 36, 38 Landauer, Rolf, 156 langue, 150, 152. See also computer code: as language; parole Laplace, Pierre-Simon, 118 Lapoujade, David, xxiii, 50, 61, 63 Latour, Bruno: actants, xv, xvi, 202, 208–9, 225, 231; “An Attempt at Writing a Compositionist’s Manifesto,” 205; “horizontalizing” of ontological plane, 230; influence of, xvi, xx, 88; and materialism, 232; Science in Action, xv. See also actor-network theory; assemblage theory Latour Litanizer (Bogost), 88, 209, 221n41 Leibniz, Gottfried Wilhelm von: and panpsychism, xxii, 20, 22, 36 Lessig, Lawrence, 152 Levinas, Emmanuel, 170, 186 Liben-Nowell, D., 147 Lingis, Alphonso, xxiv, 88 logos. See computer code: logos Lovink, Geert, 143, 162n15 Lucretius, 22, 35–36, 223; clinamen, 223; in Western culture, viii, xxviin4. See also atomism Lulu (website), 91
Mackenzie, Adrian, 139, 151 Mahoney, Michael, 150–51 Manning, Erin, 214, 227 Massumi, Brian: and affect theory, xvi–xvii; and the chora, 213–14; and enabling constraints, 52; and thinking-feeling, 73. See also under preemption materialism, 205, 206, 226; contemporary, 232, 233, 237n10; historical, xxvii, 223, 237n10; lava lamp, 205; philosophical, 223; vital, xxvi–xxvii, 208, 223, 225, 227. See also materiality; new materialism materiality, vii, xxiv, 112, 238n22; of affect, 105; and assemblages, 233; and epistemology, 201, 204; and feminist new materialism, 196, 201; of human progress, 95; ideology of, 96; and immateriality, 51; lively, 224; of media, 93; nonhuman, viii, xix; of perception, 54; rejection of, 232–33; scholarly intervention into, 193; and textuality, xvi, 234; and thought, 19–21; of the universe, 218; of the world, 120. See also new materialism matter: active, 32; agency of, xxvi; and feminist new materialism, 196–97, 199–200, 204, 207–8, 210–11, 221n39; and form, 196–97, 199–200, 206, 209; and intuition, 49; lively, 223, 224, 226, 237n7, 237n9; and mind, 20, 22, 32, 189, 213; -movement, 225–26; nonsentient, 33; and object-oriented ontology, 205–6; and relations, 228, 233; and social construction, 204; study of, 214; unformed, 229;
vibrant, xxvi, 32, 196, 208, 212, 216, 237n9; written, 89, 99 McDonald’s Videogame (Molleindustria), xxiv, 93–94 McPherson, Tara, 146 Meillassoux, Quentin, 24, 33; After Finitude, 193–94, 200–201, 203. See also correlationism; speculative realism memory: and art, 46; and computers, 140, 147–48, 150, 158–59, 165n57; of the future, 46, 48, 50–51; as mistake, 176, 181–82; Mistaken Memories of Mediaeval Manhattan (Eno), 176, 182; for Stiegler, 137n41 mentality, 20–22, 30–31, 41; for Strawson, 31–32, 34. See also inner experience; mental states mental states, 26, 28–30. See also inner experience; mentality meontic nothing, 181, 185, 189 “Minority Report, The” (Dick), 109 Minority Report (Spielberg), 101, 109–11, 129–32; compared to Person of Interest, 114, 116, 129. See also premediation Mol, Annemarie, 202 Molleindustria: McDonald’s Videogame, xxiv, 93–94; Oligarchy, 94–95 Morton, Timothy, 96, 205, 226–27, 230–31, 234. See also hyperobjects. Nagel, Thomas, 19, 28–29; “What Is It Like to Be a Bat?,” xxii, 25–26, 35 National Counterterrorism Center (NCTC), 101–4, 115 nature, 7, 189; and affect, xvii; and art, 74, 75; and culture, 10, 195;
and feminism, 197, 199–200, 202, 203; and instinct, 8; nature of, 2; nonhuman, xii, xv; and social constructivism, xi, xviii, 197–98 new materialism, viii, 32–33, 219n9, 237n10; as different from correlationism, 202–4; and dunamis, 209–10; feminist, xvii, 193, 195–96, 207–10, 221n39. See also materialism; materiality new media: and crises, xxv, 140, 143–45; technologies, xvi; theory, viii, xv, xvi, 150–51; and user agency, 148 Nietzsche, Friedrich: last men, 50; lightning, 209; will to truth, 232 nihilism, 185–88 Nonhuman Turn conference, ix–x, xii, xv, xxi, xxviiin9, 96, 241 Obama, President Barack, xxiv, xxv, 101 objectification, xxviii, xxv, 167, 174 object-oriented ontology: and anxiety, 168; and correlationism, 193–94, 196, 203–5; as critical method, 215; and form, 196, 204–5, 207; and human/nonhuman divide, xxvi; and Nonhuman Turn conference, xii–xiii, xxi, xxviiin9; and ontography, 88; and panpsychism, xxii; as sense of wonder, 85; and sentience, 180; and substantialism, 207; and vital materialism, xxvii. See also hyperobjects; objects; relationality; withdrawn objects objects: actual, 213; and affectivity,
xvii, 216; agential status of, xxiv; and art, 45, 49, 51, 57, 60, 63–64, 71, 188; books as, 91–93; for Bryant, 207, 221n39; and chora, 212; choreographic, 52, 53–54, 70; democracy of, 207, 216; and desire, 9; digital media as, xvi; elusive, 227–28; as eternal form, 205, 207; external, 10; for feminist new materialism, 196; ideologies as, 96; independence of, xxvi; intentional, 179, 184; interior of, 49, 61, 212; knowledge of, 38–39; lively, xxvii, 224, 237n9; media, 217; nonhuman, xvii, xviii; ontogenetic account of, xxii; orreries as, 94–95; of perception, 26; and philosophy, 81, 89; randomness of, 208–9; redefining, 167; and relation, 50; relations between, xxvii, 99, 202, 209–10, 228–30, 232; and subjects, 167, 224, 225, 234, 237n9; and sympathy, 1, 2, 12, 61, 72; technical, 117–18; thinking, 186; and things, 233–34; of thought, 230; and Whitehead, 12, 23, 46–47. See also hyperobjects; object-oriented ontology; withdrawn objects Oligarchy (Molleindustria), 94–95 OOO. See object-oriented ontology orrery, 94–95 outdoing, 48, 59–62, 74–75; and animals, 11 “Panpsychism Proved” (Rucker), xxii, 19, 22 parole, 150, 152, 157. See also computer code: as language; langue
performativity, 118, 188, 197, 208; and computer code, 151–52, 153, 155; “Posthumanist Performativity” (Barad), 201–2 Person of Interest (CBS Television), xxv, 114–16, 128–29, 132 phenomenology, 31; as antirealist, 203; in Buddhist thought, 169; and Husserl, 179, 183; in literary and cultural theory, 193; and materialism, 223; and sincerity, 172. See also Alien Phenomenology philosophy: experimental, 86–87; field, 87–88 Plato, 86, 99, 117, 199; and Aristotle, 196; Phaedrus, 151; Timaeus, 197, 207, 211–12 plenum, 205, 209–10, 211, 212–13, 216. See also dunamis Popper, Karl, 119–20 posthuman, ix–x; “Posthumanist Performativity” (Barad), 201 postmodernism, 172, 184, 185–86, 203, 227 precrime, 101–2, 109–11, 113–14, 130 predictive analytics, 107–8, 113–16, 118–20, 125–26, 129, 133, 134n17; and daily life, 123; and human understanding, xxiv, 110–11 preemption (Bush administration), xxiv–xxv, 108–9, 111; and Brennan testimony, 102–4, 106–7, 109, 113–14, 123; for Massumi, 103–6, 108, 112, 132, 134n14 premediation, xx, xxiv, 109–14, 129–32, 135n20, 135n23 Premediation (Grusin), 109–10, 111–12, 113, 135n20, 135n23
Programmed Visions (Chun), 140 propensity, 107, 118, 126, 133; and Popper, 119–20; and Whitehead, 120–21 qualia, 22, 26–27, 30–31, 206–7, 215. See also Wittgenstein, Ludwig: inner sensations racism, xxvi, 173, 183, 197; antiracism, 167; environmental, 170, 174 real time: and computers, 146, 148–49, 214; and crises, 145; definition of, 148; loss of memory in, 148, 165n57; new media, 148; responses, 144; responsibility, 140 Recorded Future (data analytics service), xxv, 123–25, 127–28, 137n38, 137n41 relata, 5, 37, 202 relation: of contrast, 4–6; and creativity, 50–52, 61, 63, 65–66; external, 11; fields of, xxiii, 46–47, 56, 60, 61, 71–74, 78n34; for Harman, 38, 230; and intuition, 45–46, 48–49; between objects, 35–37, 99, 194, 206, 213. See also object-oriented ontology; objects; relationality; withdrawn objects relationality, xxiv, xxvi, 40, 209, 227–29; for humans and nonhumans, xxiii; and interior of object, 212. See also object-oriented ontology; objects; relation; withdrawn objects remediation, xvi, xx, 130, 135n20 representation: of future, 112, 121, 132; of inner sensations, 26; and language, 201; in literary
and cultural theory, x, xvi, 193, 198, 200–201, 217; and media, 213; and memory of the future, 48; in object-oriented ontology, 205; and operationality, 113; and poetry, 234; and probability, 119, 120–21; procedural, 94; second-order, 196 retrovirus, 146–47, 159 RNA, 146, 181 Roberts, Celia, 195 Rucker, Rudy: “Panpsychism Proved,” xxii, 19, 22 Rumsfeld, Donald, 105. See also unknown unknown Ruyer, Raymond, 2, 8, 11, 15n10, 16n18; aesthetic yield, 9–10, 63; auto conduction, 6–7 Sadin, Éric, 117–18 Sanchez, Ana Marie, 168, 179 Schaberg, Christopher: Checking In / Checking Out, 92; Textual Life of Airports, 91–92 Sedgwick, Eve Kosofsky: and hermeneutics, 216; and the sensual, 214; “Shame in the Cybernetic Fold: Reading Silvan Tomkins,” xvi–xvii Serres, Michel, 224 Sharp, Hasana, 209, 210 Shaviro, Steven, 96, 207, 227 Simondon, Gilbert, x, xi, 23 Skeeter Rabbit, 168, 173, 175–76, 183, 186, 189; images of, 169, 190 Skrbina, David, 19–20 social constructivism, 200–204, 208, 227; and nature, xi–xii, xviii, 197–98 social sciences: and humanities, vii, ix–x; and Latour, xv
software, xxv, 139–40; crisis of, 157; definition of, 151–52, 156; free, 150; and hardware, 157. See also code; computer code; source code source code, 140, 150–53, 155–57, 165n57. See also code; computer code; software speculative realism, 20, 32, 33, 36, 193, 226; and correlationism, xii, xxvi, 218n1; and Harman, 36–37; and object-oriented ontology, xii–xiii, xxi, xxii, xxviiin9, 193, 218n1, 226; Speculative Turn, xii, xiii, xxviii; and Strawson, 32–33. See also eliminativism; Meillassoux, Quentin Speculative Turn, The (Bryant, Harman, and Srnicek), xii–xiii, xxviiin9, 203, 204 Spinoza, Baruch: and affect, xvii; conatus, 23, 224, 231; and panpsychism, xxii, 20 Spurr, Samantha, 54, 78n27 Srnicek, Nick, 203. See also Speculative Turn, The state of exception, 149–50, 152, 154–55, 157, 163n30. See also Agamben, Giorgio Stenger, Isabelle, 194–95 Stiegler, Bernard, 117, 137n41 Stitching Time (Manning), xxiii, 52–59, 62, 66–71; images of, 55, 56, 67, 71 Strawson, Galen, xxii, 20, 31–35 subjectivity, 25, 28–29, 64, 228; of animals, 11; and computer code, 151–52; and feminism, 195, 200; and Jones, 122–23; and Strawson, 34, 35; and Whitehead, 35, 117, 122–23,
136n26; and Wittgenstein, 27, 30. See also inner experience; mentality; Wittgenstein, Ludwig: inner sensations subjects. See objects: and subjects Sydney Biennale (2012), xxiii, 52, 57, 69, 78n34 sympathy, xxii, 14; animal, 1, 2, 11, 13, 14; for Bergson, 11, 48–49, 61–65, 72, 74–75. See also Bergson, Henri; intuition Talking Heads, xxv, 167, 168, 186; Remain in Light, 176. See also “Crosseyed and Painless” (song); Crosseyed and Painless (video) Tanz, Jason, 96–98 Tarde, Gabriel, 10–11 television: and liveness and catastrophe, 144–45 Textual Life of Airports, The (Schaberg), 91–92 Thacker, Eugene, 209; Exploit, 91 Thoreau, Henry David, viii, 224, 232 time: art of, 45, 47–52, 59–60, 63, 64–66, 70, 76; event-, 46, 52, 65–66, 70, 73; geological, 185; intensification of, xiii; and intuition, 45; movement of action through, xix–xx; persistence through, 23; plurality of, 49; probability as function of, 134n17; processual force of, 58–59; schematization of, 130; shared, 68, 69; space and, 5, 56, 107, 124, 158, 183, 186; televisual, 145; and Whitehead, 39–40. See also duration; real time; Stitching Time Tinbergen, Niko, xxi, 2–7
Tomkins, Silvan, xvi–xvii Tool-Being (Harman), 203, 205–7 turn: to affect, xvii; defined, xix–xxi; fatigue, xix; linguistic, 86, 203; posthuman, ix; representational, x, xvi; and tropos, 197; Whiteheadian, 127. See also Speculative Turn, The Twitter revolutions, 142 Uexküll, Jakob von, 234; umwelt, 228 uncanny, 49, 168, 176, 185, 188; agency, 187, 225; Freud’s essay on, 170; melancholic, 175 unknowability, 104, 107, 112, 132, 134n17; and language, 157 unknown unknown, xxiv, 104–6, 108, 114 Verhaege, Ria: Living with Cuddles, 75, 76 Vernes, Jean-René, 118–19 Vibrant Matter (Bennett), xxvi, 208–9 video games, xxiv, 94, 111–12, 149, 153; social, 90–91, 97; traditional, 90. See also Cow Clicker; Molleindustria Vidicon camera tube, 178; diagram of, 179 Vurdubakis, Theo, 139 Wark, McKenzie, xxviin9, 143 Weizenbaum, Joseph, 153, 156 “Well-Wrought Broken Hammer” (Harman), 215–16, 218n1, 222n62, 239n37 Whitehead, Alfred North: affectivity, xvi, xvii; atomism, 22, 37, 74; concern, 62; experience,
30–31, 35, 46–47, 120, 136n33; feeling, 41, 46, 72–73; matter, 32, 205; novelty, 23, 33–34, 61, 64, 121, 229, 233; objects, 12, 16n27, 23, 46–47; ontological power, 125–27; ontological principle of, 33–34; ontology of probability, 123, 125–26; and panpsychism, xxii, 20; perception, 35, 116–17; process, 35; propensity, 120–21; public vs. private, 37–40; real potentiality, 108–9, 120–23, 126–28, 132–33, 136n26, 137n44; superject, 11, 73, 117, 121–23, 125, 136n26, 137n44; value, 22–24; vectors, 72–74 Whitman, Walt, viii, 224, 234 WikiLeaks, 142 Williams tube, 158 Winogrand, Garry, 98–99 Wired (magazine), 96–98 withdrawn objects, 12, 87, 194, 225–27, 232; from access, xxii, 38–39, 180–81; as coy, 226–27, 231; and Heidegger, 226–27; and hiddenness, 173, 182, 188, 229, 230, 233; from relations, xxvii, 40–41, 205–7. See also hyperobjects; object-oriented ontology; objects Wittgenstein, Ludwig, 23; inner sensations, xxii, 26–28, 30 Wolfendale, Pete, 21, 24 writing: academic, 85–86, 93; and authorship, 93, 99 Yakich, Mark: Checking In / Checking Out, 92 Zynga, 90, 97
|
d0eae701efa70f09 |
Discrete & Continuous Dynamical Systems - A
November 2003, Volume 9, Issue 6
Sustainable dynamical systems
John Erik Fornæss
2003, 9(6): 1361-1386 doi: 10.3934/dcds.2003.9.1361
In this paper we investigate randomly perturbed orbits. If a dynamical system is hyperbolic, one can keep random perturbations from accumulating into large deviations by making small corrections. We study the converse problem. This leads naturally to the notion of sustainable orbits.
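A toy numerical illustration of this mechanism (our own sketch, not the paper's construction; the map, noise level, and correction cap are arbitrary choices): for the expanding doubling map, an uncorrected perturbed orbit separates from a reference orbit at an exponential rate, while per-step corrections capped at twice the noise amplitude keep the deviation at the noise level.

```python
import random

def doubling(x):
    """The doubling map x -> 2x (mod 1), a standard hyperbolic (expanding) system."""
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    """Signed distance from b to a on the circle R/Z."""
    return (a - b + 0.5) % 1.0 - 0.5

def max_deviation(noise=1e-4, correct=True, steps=60, x0=0.2):
    """Run a randomly perturbed orbit next to an unperturbed reference orbit,
    optionally applying a small capped correction at every step."""
    x_ref, x, worst = x0, x0, 0.0
    for _ in range(steps):
        x_ref = doubling(x_ref)
        x = (doubling(x) + random.uniform(-noise, noise)) % 1.0
        if correct:
            dev = circle_dist(x, x_ref)
            x = (x - max(-2 * noise, min(2 * noise, dev))) % 1.0  # capped correction
        worst = max(worst, abs(circle_dist(x, x_ref)))
    return worst

print("with corrections:   ", max_deviation(correct=True))   # stays at the noise level
print("without corrections:", max_deviation(correct=False))  # grows to order one
```

Since the map doubles any existing deviation before new noise is added, a correction budget slightly larger than the noise per step suffices to keep the perturbed orbit in a tube of radius comparable to the noise.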
Dispersive estimate for the wave equation with the inverse-square potential
Fabrice Planchon, John G. Stalker and A. Shadi Tahvildar-Zadeh
2003, 9(6): 1387-1400 doi: 10.3934/dcds.2003.9.1387
We prove that spherically symmetric solutions of the Cauchy problem for the linear wave equation with the inverse-square potential satisfy a modified dispersive inequality that bounds the $L^\infty$ norm of the solution in terms of certain Besov norms of the data, with a factor that decays in $t$ for positive potentials. When the potential is negative we show that the decay is split between $t$ and $r$, and the estimate blows up at $r=0$. We also provide a counterexample showing that the use of Besov norms in dispersive inequalities for the wave equation is in general unavoidable.
Diophantine conditions in small divisors and transcendental number theory
E. Muñoz Garcia and R. Pérez-Marco
2003, 9(6): 1401-1409 doi: 10.3934/dcds.2003.9.1401
We present analogies between Diophantine conditions appearing in the theory of Small Divisors and classical Transcendental Number Theory. Let K be a number field. Using Bertrand's postulate, we give a simple proof that $e$ is transcendental over Liouville fields K$(\theta)$ where $\theta$ is a Liouville number with explicit very good rational approximations. The result extends to any Liouville field K$(\Theta)$ generated by a family $\Theta$ of Liouville numbers satisfying a Diophantine condition (the transcendence degree can be uncountable). This Diophantine condition is similar to the one appearing in Moser's theorem of simultaneous linearization of commuting holomorphic germs.
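For context, a standard textbook example (not taken from the paper) of a Liouville number with explicit very good rational approximations: $\theta$ is Liouville when for every $n$ there exist integers $p$ and $q\ge 2$ with
$$0<\left|\theta-\frac{p}{q}\right|<\frac{1}{q^{n}},$$
and for the classical Liouville constant $\theta=\sum_{k\ge 1}10^{-k!}$, truncating the series after $k$ terms gives $p_k/q_k$ with $q_k=10^{k!}$ and
$$\left|\theta-\frac{p_k}{q_k}\right|<2\cdot 10^{-(k+1)!}=2\,q_k^{-(k+1)},$$
so the order of approximation grows without bound.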
Noncommutative dynamical systems with two generators and their applications in analysis
Boris Paneah
2003, 9(6): 1411-1422 doi: 10.3934/dcds.2003.9.1411
In this paper, some new dynamical systems which are determined by a semigroup $\Phi$ of maps in a closed interval $I$ are studied. The main peculiarity of these systems is that $\Phi$ is generated by two noncommuting maps. Introducing certain closed subsets $\mathcal T_1$ and $\mathcal T_2$ in $I$ makes it possible to determine some specific orbits corresponding to $\Phi$ and some specific attractors in $I$. These orbits play a crucial role in solving a wide variety of problems in such diverse fields of analysis as functional and functional-integral equations, integral geometry, and boundary problems for hyperbolic partial differential equations of higher $(>2)$ order. In the first part of this work we describe some conditions which ensure the existence of the attractors in question with a special structure. In the second part several new problems in the above-mentioned fields of analysis are formulated, and we trace how the above dynamic approach works in solving these problems.
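A minimal concrete instance of such a semigroup (a toy of our own choosing, not the maps studied in the paper): the two affine contractions $f(x)=x/2$ and $g(x)=(x+1)/2$ on $I=[0,1]$ do not commute, since $f(g(x))=(x+1)/4$ while $g(f(x))=(x+2)/4$, and the orbit of any point under words in $f$ and $g$ spreads over the whole interval.

```python
from itertools import product

f = lambda x: x / 2        # two noncommuting affine maps on [0, 1]
g = lambda x: (x + 1) / 2  # f(g(x)) = (x+1)/4  !=  (x+2)/4 = g(f(x))

def semigroup_orbit(x0, n):
    """Images of x0 under every word of length n in the semigroup generated by f and g."""
    points = set()
    for word in product((f, g), repeat=n):
        x = x0
        for h in word:
            x = h(x)
        points.add(round(x, 6))
    return sorted(points)

orbit = semigroup_orbit(0.3, 10)
print(len(orbit), min(orbit), max(orbit))  # 1024 distinct points spread over [0, 1]
```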
Uniform Bernoulli measure in dynamics of permutative cellular automata with algebraic local rules
Bernard Host, Alejandro Maass and Servet Martínez
2003, 9(6): 1423-1446 doi: 10.3934/dcds.2003.9.1423
In this paper we study the role of uniform Bernoulli measure in the dynamics of cellular automata of algebraic origin.
First we show a representation result for classes of permutative cellular automata: those with associative type local rule are the product of a group cellular automaton with a translation map, and if they satisfy a scaling condition, they are the product of an affine cellular automaton (the alphabet is an Abelian group) with a translation map.
For cellular automata of this type with an Abelian factor group, and starting from a translation invariant probability measure with complete connections and summable decay, it is shown that the Cesàro mean of the iteration of this measure by the cellular automaton converges to the product of the uniform Bernoulli measure with a shift invariant measure.
Finally, the following characterization is shown for an affine cellular automaton whose alphabet is a group of prime order: the uniform Bernoulli measure is the unique invariant probability measure which has positive entropy for the automaton, and is either ergodic for the shift or ergodic for the $\mathbb Z^2$-action induced by the shift and the automaton, together with a condition on the rational eigenvalues of the automaton.
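The simplest classical instance of this Cesàro convergence can be checked numerically (our own sketch with arbitrary parameters; the additive rule below, $x_i\mapsto x_i+x_{i+1}\pmod 2$, is the standard example, not the paper's general setting): starting from a biased Bernoulli measure, the time-averaged one-cell marginal approaches the uniform value $1/2$.

```python
import random

p, N, T, samples = 0.8, 128, 200, 500  # bias, lattice size, time horizon, trials

def step(x):
    """Additive cellular automaton over Z_2: x_i <- x_i + x_{i+1} (mod 2), periodic."""
    n = len(x)
    return [x[i] ^ x[(i + 1) % n] for i in range(n)]

total = 0.0
for _ in range(samples):
    x = [1 if random.random() < p else 0 for _ in range(N)]
    for _ in range(T):
        total += x[0]  # occupation of a fixed cell along the orbit
        x = step(x)
cesaro = total / (T * samples)
print(cesaro)  # close to 1/2, the marginal of the uniform Bernoulli measure
```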
Dimension of Markov towers for non uniformly expanding one-dimensional systems
Fernando J. Sánchez-Salas
2003, 9(6): 1447-1464 doi: 10.3934/dcds.2003.9.1447
We prove that a non uniformly expanding one-dimensional system defined by an interval map with an ergodic nonatomic Borel probability measure $\mu$ with positive Lyapunov exponent can be reduced to a Markov tower with good fractal geometrical properties. As a consequence we approximate $\mu$ by ergodic measures supported on hyperbolic Cantor sets of arbitrarily large dimension.
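For reference, the positivity assumption on the exponent can be written, via Birkhoff's ergodic theorem and the chain rule $\log|(f^{n})'(x)|=\sum_{k=0}^{n-1}\log|f'(f^{k}x)|$, as
$$\lambda(\mu)=\int\log|f'|\,d\mu=\lim_{n\to\infty}\frac{1}{n}\log\left|(f^{n})'(x)\right|>0\qquad\text{for }\mu\text{-a.e. }x.$$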
Heteroclinic foliation, global oscillations for the Nicholson-Bailey model and delay of stability loss
Sze-Bi Hsu, Ming-Chia Li, Weishi Liu and Mikhail Malkin
2003, 9(6): 1465-1492 doi: 10.3934/dcds.2003.9.1465
This paper is concerned with the classical Nicholson-Bailey model [15] defined by $f_\lambda(x,y)=(y(1-e^{-x}), \lambda y e^{-x})$. We show that for $\lambda=1$ a heteroclinic foliation exists and for $\lambda>1$ global strict oscillations take place. The important phenomenon of delay of stability loss is established for a general class of discrete dynamical systems, and it is applied to the study of nonexistence of periodic orbits for the Nicholson-Bailey model.
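Since the map is explicit, the growing oscillations for $\lambda>1$ are easy to observe numerically (a quick sketch; the value of $\lambda$ and the initial condition are arbitrary choices of ours):

```python
import math

def nicholson_bailey(x, y, lam):
    """One step of f_lambda(x, y) = (y(1 - e^{-x}), lambda * y * e^{-x})."""
    return y * (1.0 - math.exp(-x)), lam * y * math.exp(-x)

x, y = 1.0, 1.0
for n in range(1, 61):
    x, y = nicholson_bailey(x, y, lam=1.1)
    if n % 10 == 0:
        print(n, round(x, 5), round(y, 5))
# the orbit spirals away from the interior fixed point
# (x*, y*) = (log lam, lam * log(lam) / (lam - 1)) with growing amplitude
```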
Global bifurcation of homoclinic solutions of Hamiltonian systems
S. Secchi and C. A. Stuart
2003, 9(6): 1493-1518 doi: 10.3934/dcds.2003.9.1493
The main results give hypotheses ensuring that a non-autonomous first order Hamiltonian system has a global branch of homoclinic solutions bifurcating from an eigenvalue of odd multiplicity of the linearization. The system is required to be asymptotically periodic (as time goes to plus and minus infinity) and these limit problems should have no homoclinic solutions. Furthermore, the asymptotic limits of the linearization should have no characteristic multipliers on the unit circle. The proof uses the topological degree for proper Fredholm maps of index zero.
Global existence and blow-up of solutions to a nonlocal reaction-diffusion system
Shu-Xiang Huang, Fu-Cai Li and Chun-Hong Xie
2003, 9(6): 1519-1532 doi: 10.3934/dcds.2003.9.1519
This paper deals with a reaction-diffusion system with nonlocal sources. Under appropriate hypotheses, we obtain that the solution either exists globally or blows up in finite time by making use of super- and sub-solution techniques. In the situation when the solution blows up in finite time, we show that the blow-up set is the whole domain, which is quite different from the results with local sources. Furthermore, we obtain the blow-up rate of the solution.
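The abstract does not display the system; a representative model of the type considered (the exact form is our assumption, not quoted from the paper) is
$$u_t=\Delta u+\int_\Omega v^{p}\,dx,\qquad v_t=\Delta v+\int_\Omega u^{q}\,dx\qquad\text{in }\Omega\times(0,T),$$
with $p,q>0$. Because each source term is a spatial integral of the other component, it is constant in space at each instant, so no point of the domain is singled out; this is the heuristic reason the blow-up set is all of $\Omega$, in contrast with local sources such as $v^{p}$, where blow-up can concentrate at isolated points.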
Symbolic analysis for some planar piecewise linear maps
Peter Ashwin and Xin-Chu Fu
2003, 9(6): 1533-1548 doi: 10.3934/dcds.2003.9.1533
In this paper a class of linear maps on the 2-torus and some planar piecewise isometries are discussed. For these discontinuous maps, by introducing codings underlying the map operations, symbolic descriptions of the dynamics and admissibility conditions for itineraries are given, and explicit expressions in terms of the codings for periodic points are presented.
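A minimal example of such a coding (a toy of ours, not the class analyzed in the paper): iterate an integer matrix on the fundamental domain $[0,1)^2$ and record, at each step, the integer part discarded by the mod-1 reduction; the resulting symbol sequence is the itinerary of the orbit.

```python
def coded_step(x, y):
    """One step of the linear torus map (x, y) -> (x + y, x) mod 1.
    The symbol is the integer carry discarded by the reduction,
    i.e., which branch of the discontinuity the point went through."""
    s = x + y
    return s % 1.0, x, int(s)  # int(s) is 0 or 1 for (x, y) in [0,1)^2

x, y = 0.123, 0.456
itinerary = []
for _ in range(30):
    x, y, symbol = coded_step(x, y)
    itinerary.append(str(symbol))
print("".join(itinerary))  # symbolic description of 30 steps of the orbit
```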
Time optimal problems with Dirichlet boundary controls
N. Arada and J.-P. Raymond
2003, 9(6): 1549-1570 doi: 10.3934/dcds.2003.9.1549
We consider time optimal control problems governed by semilinear parabolic equations with Dirichlet boundary controls in the presence of a target state constraint. To establish optimality conditions for the terminal time $T$, we define a new Hamiltonian functional. Due to regularity results for the state and the adjoint state variables, this Hamiltonian belongs to $L_{loc}^r(0,T)$ for some $r>1$. By proving that it satisfies a differential equation corresponding to an optimality condition for $T$, we deduce that it belongs to $W^{1,1}(0,T)$. This result answers the question: how to define Hamiltonian functionals for infinite dimensional problems with variable endpoints (see [10], p. 282 and p. 595).
Modified wave operators for the coupled wave-Schrödinger equations in three space dimensions
Akihiro Shimomura
2003, 9(6): 1571-1586 doi: 10.3934/dcds.2003.9.1571
We study the scattering theory for the coupled wave-Schrödinger equation with the Yukawa type interaction, which is a certain quadratic interaction, in three space dimensions. This equation belongs to the borderline between the short range case and the long range one. We construct modified wave operators for that equation for small scattered states with no restriction on the support of their Fourier transforms.
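Up to signs and normalizations (our guess at the standard form; the abstract does not display the equations), the coupled system with Yukawa-type interaction reads
$$i\partial_t u+\tfrac{1}{2}\Delta u=uv,\qquad \partial_t^2 v-\Delta v=|u|^{2},\qquad (t,x)\in\mathbb{R}\times\mathbb{R}^{3},$$
with $u$ complex (the Schrödinger field) and $v$ real (the wave field); the quadratic couplings $uv$ and $|u|^2$ are the interaction referred to above, and their slow decay is what places the problem on the borderline between the short range and long range cases.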
A viscous approximation for a multidimensional unsteady Euler flow: Existence theorem for potential flow
Gui-Qiang Chen and Bo Su
2003, 9(6): 1587-1606 doi: 10.3934/dcds.2003.9.1587
We study a nonlinear system of partial differential equations that is a viscous approximation for a multidimensional unsteady Euler potential flow governed by the conservation of mass and the Bernoulli law. The system consists of a transport equation for the density and the viscous nonhomogeneous Hamilton-Jacobi equation for the velocity potential. We establish the existence and regularity of global solutions for the nonlinear system with arbitrarily large periodic initial data. We also prove that the density in our global solutions has a positive lower bound, that is, our solutions always stay away from the vacuum, as long as the initial density has a positive lower bound.
Global and local complexity in weakly chaotic dynamical systems
Stefano Galatolo
2003, 9(6): 1607-1624 doi: 10.3934/dcds.2003.9.1607
The generalized complexity of an orbit of a dynamical system is defined by the asymptotic behavior of the information that is necessary to describe $n$ steps of the orbit as $n$ increases. This local complexity indicator is also invariant up to topological conjugation and is suited for the study of $0$-entropy dynamical systems. First, we state a criterion to find systems with "non trivial" orbit complexity. We then also consider a global indicator of the complexity of the system. This global indicator generalizes the topological entropy, having non trivial values for systems where the number of essentially different orbits increases less than exponentially. Then we prove that if the system is constructive (if the map can be defined up to any given accuracy by some algorithm) the orbit complexity is everywhere less than or equal to the generalized topological entropy. Conversely, there are compact non constructive examples where the inequality is reversed, suggesting that the notion of a constructive map comes out naturally in this kind of complexity question.
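The growth of the information needed to describe $n$ steps of an orbit can be probed empirically with a compressor (a crude proxy of ours, in the spirit of the definition rather than the paper's formal construction):

```python
import zlib

def logistic(x):
    """A chaotic (positive-entropy) map."""
    return 4.0 * x * (1.0 - x)

def rotation(x):
    """A zero-entropy irrational rotation."""
    return (x + 0.6180339887) % 1.0

def orbit_information(f, x0, n):
    """Compressed length of the first n symbols of the binary itinerary
    (0 if x < 1/2, else 1) -- a rough stand-in for orbit complexity."""
    x, bits = x0, []
    for _ in range(n):
        bits.append('0' if x < 0.5 else '1')
        x = f(x)
    return len(zlib.compress(''.join(bits).encode()))

for n in (1000, 2000, 4000):
    print(n, orbit_information(logistic, 0.3, n), orbit_information(rotation, 0.3, n))
# the chaotic itinerary compresses poorly (roughly linear growth in n),
# while the rotation's low-complexity itinerary compresses much better
```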
Global stability for damped Timoshenko systems
J.E. Muñoz Rivera and Reinhard Racke
2003, 9(6): 1625-1639 doi: 10.3934/dcds.2003.9.1625
We consider a nonlinear Timoshenko system as an initial-boundary value problem in a one-dimensional bounded domain. The system has a dissipative mechanism through frictional damping being present only in the equation for the rotation angle. We first give an alternative proof of a necessary and sufficient condition for exponential stability in the linear case. Polynomial stability is proved in general. The global existence of small, smooth solutions and the exponential stability are investigated for the nonlinear case.
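In the standard notation for this model (our transcription of the usual formulation; the abstract does not display the equations), the linear system is
$$\rho_1\varphi_{tt}-k(\varphi_x+\psi)_x=0,\qquad \rho_2\psi_{tt}-b\psi_{xx}+k(\varphi_x+\psi)+d\,\psi_t=0,$$
with the frictional damping $d\,\psi_t$ present only in the equation for the rotation angle $\psi$; the necessary and sufficient condition for exponential stability referred to above is, in this normalization, the equal-wave-speeds condition $k/\rho_1=b/\rho_2$.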
|
1d5ba669b201c2d4 | Quantum Matter Animated!
by Jorge Cham
What does it mean for something to be Quantum? I have to confess, I don’t know. My Ph.D was in Robotics and Kinematics, so my neurons are deeply trained to think in terms of classical dynamics. I asked my siblings (two engineers and one architect) what comes to mind for them when they hear the word Quantum, what they remember from college physics, and here is what they said:
– “Quantum Leap!” (the late 80’s TV show)
– “Quantum of Solace!” (the James Bond movie which, incidentally, was filmed in my home country of Panama, even though the movie was set in Bolivia)
– “I don’t remember anything I learned in college”
– “Light acting as a particle instead of a wave?”
The third answer came from my sister, who went to MIT. The fourth came from my brother, who went to Stanford (+1 point for Stanford!).
I also asked my spouse what comes to mind for her. She said, “Quantum Computing: it’s the next big advance in computers. Transistors the size of atoms.” Clearly, I married someone smarter than me (she also went to Stanford). When I asked if she knew how they worked, she said, “I don’t know how it works.” She also said, “Quantum is related to how time moves more slowly as you approach the speed of light, right?” Nice try, but that’s Relativity (-1 point for Stanford!).
I think the word Quantum has a special power in our collective consciousness. It’s used to convey science-iness, technology, the weirdness of the Physical world. If you Google “Quantum”, most of the top hits are for technology companies that have nothing to do with Quantum Physics (including Quantum Fishing Tackles; I suppose that half the time, you pull up a dead fish).
It’s one of those words that a lot of people have heard of, but very few really understand what it means. Which is why I was excited when Spiros Michalakis and IQIM approached me to produce a series of animations that explore and explain Quantum Information and Matter. Like my previous videos (The Higgs Boson, Dark Matter, Exoplanets), I’d have the chance to interview experts in this field and use their expertise and their voices to learn and to help others learn what amazing things lie just around the corner, beyond our classical understanding of the Universe.
This will be a big Leap for me (I’m trying to avoid the obvious pun), and a journey of exploration. The first installment goes live today, and you can watch it below. Like Schrödinger’s box, I don’t know what we’ll discover with these videos, but I know there are exciting possibilities inside. This is also going to be a BIG challenge. Understanding and putting Quantum concepts in visual form will be hard. I mean, Hair-pulling hard. Fortunately, I’ve discovered there’s a remedy for that.
Watch the first installment of this series:
Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com).
Featuring: Amir Safavi-Naeini and Oskar Painter http://copilot.caltech.edu/
Produced in Partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech with funding provided by the National Science Foundation.
Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter
65 thoughts on “Quantum Matter Animated!”
1. Is it still not possible that the laser gave some part of its energy to the mirror? Is it possible to detect such a small instantaneous rise in temperature (which will be dispersed to the surroundings within a fraction of a second, as it is maintained at 0 K)? Because if it is not completely possible to measure such small changes in temperature in such a short time, then how can we be sure that the red-shifted laser is NOT due to the laser giving off its energy? And if this is the reason, then this still does not prove that the mirror was vibrating. It started vibrating only after being hit by the laser. But due to temperature dispersal the mirror was instantly damped and brought back to zero vibrations, i.e. the ground state.
• I am not a physicist, but the intuitive answer to your question is that if the laser were imparting energy to the mirror, and that was where the red shift was coming from, then there would still be a corresponding blue shift.
• Right. When the oscillator is in its quantum ground state, it can absorb energy but cannot emit energy because it is already in its lowest possible energy state. Reflected light can be shifted toward the red (have lower energy than the incident light, because the oscillator absorbed some of the incident energy), but cannot be shifted toward the blue (have higher energy than the incident light). That’s what the experiment found.
• Just a follow-on to John’s response…
The inability of the mechanical resonator to give up energy when it is in its lowest energy state seems like an obvious statement (by definition of “lowest energy state”), and so why is the experiment interesting then? All it did was confirm that indeed this energy emission goes away as the object gets colder and colder and approaches its ground (lowest energy) state.
It is really the fact that the mechanical resonator can absorb energy when it is in the ground state that is interesting. The classical description of the motion of a mechanical object has no way of allowing for this asymmetry in the emission and absorption of energy with the environment; the processes must be symmetric and zero when the object is not moving at temperature = 0 K. Think of it from the standpoint that the mechanical object isn’t moving when in its classical ground state, and thus it is not doing work on its environment and the environment is not doing work on it.
That is what makes the quantum description of the ground state of motion interesting; it allows for the asymmetry in the process of emission and absorption of energy by the mechanical resonator to (or from) the environment. I like to make the analogy to the spontaneous emission of light from an atom, in which there is no corresponding spontaneous absorption process of light. A well defined “mode” (think of it as a particular direction and polarization) of light can be described by a similar set of quantum equations as that describing the mechanical resonator, and thus also has a ground state with intrinsic fluctuations. These “zero-point fluctuations” or “vacuum fluctuations” can be thought of as triggers for atomic spontaneous decay and emission of light by the atom, but do not cause the reverse process of spontaneous excitation of the atom.
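A minimal numerical sketch of this asymmetry, assuming only the standard scaling that the red (Stokes) sideband power goes as n̄+1 and the blue (anti-Stokes) as n̄, where n̄ is the Bose-Einstein occupation of the mechanical mode; the 4 GHz frequency is an illustrative value, not the actual device’s:

    import numpy as np

    hbar = 1.054571817e-34   # J*s
    kB = 1.380649e-23        # J/K
    omega = 2 * np.pi * 4e9  # mechanical frequency (illustrative 4 GHz, not the real device)

    def n_bar(T):
        # Bose-Einstein occupation of the mechanical mode at temperature T
        return 1.0 / np.expm1(hbar * omega / (kB * T))

    for T in [10.0, 1.0, 0.1, 0.02]:
        n = n_bar(T)
        # red (Stokes) sideband power ~ n+1, blue (anti-Stokes) ~ n
        print("T = %6.2f K   n_bar = %9.4f   blue/red = %.5f" % (T, n, n / (n + 1)))

As T falls, n̄ goes to 0, and the blue sideband vanishes while the red one survives: exactly the absorb-but-not-emit asymmetry described above.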
[Aside: This used to really mystify me when I first learned about spontaneous (and the related stimulated) emission of atoms. The excellent little book by Allen and Eberly does a nice job of de-mystifying the vacuum fluctuations.]
A nice description of the above argument is also given in Aashish Clerk’s Physics Viewpoint accompanying article:
• Hi Oskar, John, and Paras:
0. For some odd reason, while fast browsing, I first read Oskar’s reply, and then John’s, and only after both, Paras’ original question. (Oskar’s was the longest and innermost indented reply, and so it sort of first caught the eye in the initial rapid browsing.) Even before going through your respective replies, I had happened to think of what in many ways is the same point as Paras has tried to point out above. … Ok. Let me put the query the way I thought of.
1. Here is a simple model of the above experimental arrangement, simplified very highly, just for the sake of argument.
The system here consists of the mechanical oscillator and the light field.
The environment consists of the light source, the optical measurements devices, the cooling devices, and then, you know, the lab, the earth, all galaxies in the universe, the dark matter, the dark energy … you get the idea. The environment also includes the mechanical support of the oscillator, which in turn, is connected to the lab, the earth, etc.
*Only* the system is cooled to 0 K. [Absolutely! 😉 Absolutely, only the system is cooled “to” “0” K!!]
The measurement consists of only one effect produced by the light-to-mechanical-oscillator interaction: the changes effected in the reflected light.
This effect, it is experimentally found, indeed is in accordance with the QM predictions. (BTW, in fact, the experiment is much more wonderful than that: it should be valuable in studying the classical-to-QM transitions as well. But that’s just an aside as far as this discussion goes.)
2. Now my question is this: what state of |ignorance> + |stupidity> + |insanity> + |sinfulness> [+ etc…] do I enter into, if I forward the following argument:
At “0” K, the system gets into such a quantum mode that as far as the *reflection* of the light is concerned, if “I” is the amount of the incident light energy (say per unit time), then only some part of it (i.e. the red-shifted part of it) is found to be reflected.
However, there still remains an “I – R” amount of energy that the system gives back to the environment via some *experimentally* unmeasured means. If it doesn’t, the first law of thermodynamics would get violated.
We may wonder, what could be the form taken by such an energy leakage? Given the bare simplicity of the above abstract description as to what the system and environment here respectively consist of, the answer has to be: via some mechanical oscillation modes of the mechanical oscillator that we do not take account of (let alone measure) in this experiment. The leakage would affect the mechanical support of the oscillator, which, please note, lies *outside* of the system.
[The oscillation modes inside the system may be taken to be quantum-mechanical ones; outside of it, as classical ones. But I won’t enter into debates of where the boundary between the quantum and the classical is to be drawn, etc. As far as this experiment—and this argument—goes, we know that “inside” the system, it has to be treated QMechanically; outside, it’s classical mechanically; and that’s enough, as far as I am concerned!]
Since the system here is not a passive device but is *actively* being kept cooled down “to” “0” K, it means: it’s the “freeze” sitting in the environment, not to mention the earth and the rest of the universe, which absorb these leaked out vibrations of the mechanical oscillator. The missing energy corresponds to *this* leakage.
3. Of course, I recognize that my point is subtly different from Paras’. His write-up seems to suggest that there is an otherwise classically rigid-body oscillator sitting still, which begins to vibrate only after being hit by the laser. In contrast, I don’t have this description in mind. He also seems to think rather in terms of a *transient* damping out of the mechanical oscillations. Though I do not rule out transients in the system, that wouldn’t be the simplest model one might suggest here: I would rather think of the situation as if there were a more or less “steady-state” leakage of the missing energy into the environment. Yet, Paras does seem to appreciate the role of the environment—the unmeasured side-effects, so to speak, that the system produces on the environment.
4. Anyway, I would appreciate it if you could kindly let me know in what final state I should collapse: |ignorance> or |stupidity> or … . And, why 🙂
[BTW, by formal training, I am just an engineer. And, sorry if my reply is too verbose and had too many diversions…]
Thanks in advance.
• About your parts (2) and (3), I think it is easier to think of it this way:
At the low temperatures the system is subjected to (I really don’t think it even makes sense to say that “only the system is cooled down to 0 K”; instead, just saying that the system is cooled down to low temperatures is enough), a lot of the system’s constituent particles are in their ground states.
What is happening in this experiment is that absorption and excitation of constituent particles up from their ground states is observed without the corresponding “classical” de-exciting reflection wave that you normally get. This is predicted by quantum physics.
The special thing about this experiment, though, is that they are also saying that the entire system itself, a macroscopic body, has a quantum wavefunction just like its microscopic parts. That is the part that is interesting and worth reporting upon. Because, if a macroscopic body has a quantum wavefunction, then it can also do all the rest of the quantum weirdness, and that applies to us humans, the Earth, being able to, say, perform quantum tunnelling.
Once you see the experiment in this way, it is then obvious that the loss of energy that you perceive is merely the spontaneous emission of light by the excited particles, and, in this way, they drop back into the ground state of the entire system. This is important, because spontaneous emission is basically undetectable in our case, which is what the experiment observed. The point is that, classically, you are supposed to observe substantial energetic reflection (along with the spontaneous emission that you cannot remove), and you do not observe that in this experiment.
2. Could you add a link to the paper about the experiment for those readers who want more details about it?
3. Whenever someone asks me for a book to explain quantum mechanics to laymen, I always point them to this:
It’s an illustrated book about the history of quantum mechanics created by Japanese translation students studying English. They chose the topic because they needed to be able to accurately translate relatively technical material. It’s wonderful for answering the questions you raise in the post above.
4. Great video describing a really interesting experiment. However, it is far from reaching the important lessons from Quantum Mechanics that have shaped the way we see the universe.
Forget Quantum Computing. I am not saying that Quantum Computing is not sexy or something, but it is not where the paradigm shift is. One of the greatest things that a Physics undergraduate degree forces you through is learning about Condensed Matter Physics.
You might think that, in contemporary Physics education, they would certainly teach you both Quantum Mechanics and General Relativity. After all, they are what we call the new world view that revolutionised how we as a species see ourselves.
The truth, however, is that, if I had not forced them to teach me, they would have ignored General Relativity and just taught me Quantum Mechanics. Lots of it. Without motivation. It is only at the end of the Physics degree that you get to see why it is arranged in the way it is.
Special Relativity, the one that Einstein published in 1905, is a really easy thing. Yes, it is bizarre, but you can easily teach it, and later on, you can tell students to apply what they have learnt. That it is reducible into small equations that are easy to memorise, is another plus point. General Relativity, on the other hand, is a pain to teach — everybody, mathematician or physicist, would be confused by the initial arguments, the mathematical notation and all that jazz, until you have completed the module. And even after that, some people just never get it (although, luckily, it is simple enough that a large chunk of people actually understand it very fully, to the contrary of Eddington’s bad joke).
The deal breaker, however, is that the ideas from General Relativity, although a nice help to the other parts of Physics, are far from essential, i.e. people can make do without any knowledge of them and still contribute to the rest of Physics in a proper way. That is not the same with Quantum Mechanics.
The standard way they teach Quantum Mechanics these days is to throw the mathematics at you, right at the start. Just write down your energy equation (that you can remember from high school), do your canonical quantisation (which is nothing other than replacing symbols you know about with derivative signs; a monkey can do that), and tack on something magical that we call the wavefunction, and voilà, Quantum Goodness!
Since there is nothing to actually understand about it, I watched in amusement as everybody around me struggled to understand something out of nothing, congratulating myself for actually knowing the meaninglessness of it all.
Boy, what do I know?
The next module, aptly named “Atomic and Molecular Physics”, looked like nothing but applications of the mathematics learnt. It was HORRIBLE to go through, especially since it looked like vocational training — approximation and other calculational techniques that are hardly useful outside higher and higher corrections to the properties of materials that classical physics could have found out about (except quantisation, of course). It was important to have learnt it (not least because it was the first place in which Quantum Entanglement was taught), but it felt like we are just learning tricks instead of ideas.
Statistical Thermodynamics was better. Building upon Thermal Physics in first year, there was a bit of Quantum effects being shown in action, especially the Quantum Degeneracy pressure that keeps stars the size they are.
Then BOOM! Condensed Matter Physics (I learnt it under the older name, Solid State Physics). I had to completely rewrite what I thought I had known about Quantum Theory, for it is obvious I knew nothing.
I am sure you guys have heard of the adage: “When stuff is moving fast, is large, or is heavy, General Relativity cannot be neglected. When things are small, Quantum corrections cannot be neglected.” It is still true, but there is a sleight of hand here — we have yet to define what it means to be “large” or “small”.
In particular, whenever you have a lot of material squeezed into a small space, i.e. high density, it is small. Thus, something can be both large and small at the same time, requiring both General Relativity and Quantum Theory to describe. A black hole is one such object.
The name “Condensed Matter”, is a really good one. Any liquid or solid, really, is condensed, so condensed, that actually, it is no longer a classical system — the quantum effects DOMINATE. Without incorporating Quantum Theory right into the heart of it all, nothing you calculate even makes sense. And since our first approximations here beat the best classical calculations left-right-centre, there was also no reason to teach the classical approximation techniques either.
Specifically, notice how, in high school, people teach you that heat and sound are just atoms moving about in different ways? Classical theories can talk about heat propagation and sound propagation and motion. But they are three different islands that don’t even make sense together. So different, that even their mathematical tools are different. But in Quantum Theory, the same mathematics describe all three as one united whole, on the zeroth approximation, and even give you dispersion, which is something classical theories cannot explain without complicated methods.
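That last claim, one formula giving both a fixed sound speed and dispersion, can be sketched with the textbook one-dimensional monatomic chain (a standard model, not anything specific to this thread; parameter values are arbitrary):

    import numpy as np

    K, m, a = 10.0, 1.0, 1.0                      # spring constant, mass, lattice spacing (arbitrary)
    k = np.linspace(1e-4, np.pi / a, 2000)        # wavevectors up to the zone boundary
    omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))  # phonon dispersion relation

    v_group = np.gradient(omega, k)               # group velocity d(omega)/dk
    print("sound speed (k -> 0):", v_group[0])    # approaches a*sqrt(K/m): a fixed speed of sound
    print("a*sqrt(K/m)         :", a * np.sqrt(K / m))
    print("v_group at zone edge:", v_group[-1])   # tends to 0: the same formula is dispersive

The long-wavelength limit reproduces the fixed sound speed of the harmonic approximation, while the very same ω(k) bends over near the zone edge; thermally occupying these same modes is what gives heat, so sound and heat really do come out of one object.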
After being floored by how it actually is done, the icing on the cake is Transistors. The theory was originally made in order to explain how metals behave, and we talk about a free electron gas to explain how metals conduct so well. So, it came as a complete shock that with any improvement, notice, ANY SIMPLE improvement to the free electron picture, be it the Nearly Free Electron model or the Tight Binding model, energy bands appear. In practical terms, the theory that sought to explain metals now explains insulators, and even more, predicts the existence of this previously unheard of class of materials, known as semiconductors.
Indeed, it does even better. It predicts their existence, how to make them, and how they would be useful. It is the first time that Physics THEORY had gotten ahead of the experimenters on any topic.
So, yeah, while you are enjoying your computers reading this piece, appreciate the sheer ingenuity and wonder that is brought to you by the Quantum revolution.
Please alert Jorge to this. He can do wonders with information.
Sound propagation and bulk motion can be treated the same way, because they are both forms of characteristic wave propagation and show up as eigenvalues of the same equation set. Heat transfer, viscous momentum transfer, and diffusive mass transfer all work basically the same way, because they are closely related effects of the same basic process. All of this can be derived in a unified framework using the principles of classical kinetic theory, because all of it is inherent in the Boltzmann equation.
It’s true that you need quantized internal energy states to accurately predict something as simple as the temperature dependence of specific heat in a gas. But it seems to me that you are somewhat exaggerating the shortcomings of classical physics.
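The “eigenvalues of the same equation set” remark above can be made concrete in a few lines of sympy, assuming the standard 1D Euler equations for an ideal gas (a textbook specialization, a sketch rather than anything derived from the Boltzmann equation here):

    import sympy as sp

    rho, u, p, gamma = sp.symbols('rho u p gamma', positive=True)

    # flux Jacobian of the 1D Euler equations in primitive variables (rho, u, p)
    A = sp.Matrix([[u, rho,       0],
                   [0, u,         1/rho],
                   [0, gamma*p,   u]])

    # eigenvalues: u (bulk convection) and u +/- sqrt(gamma*p/rho) (acoustic propagation)
    print(A.eigenvals())

Bulk motion (characteristic speed u) and sound (u ± c, with c the ideal-gas sound speed) show up as eigenvalues of one and the same system.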
I am really doubtful of that. The reason is that the mathematical apparatus is just not the same. For the propagation of heat, you have the heat equation in classical physics, with the propagation constant kappa. For sound, the Harmonic approximation gives rise to a fixed speed of sound, which you later improve upon by adding anharmonic terms so that the speed of sound becomes a variable.
Those two constants are not the same. Granted, they are dimensionally inconsistent, but the fact is that you have to treat them rather differently. The reason for this discrepancy is that sound propagation exhibits higher frequency dependence, so it is easier to look at one frequency at a time. Heat, on the other hand, is usually averaged over in the context of classical heat propagation. This makes it really complicated, as you have to average over both space and time and weight the states according to the probabilities of being in so-and-so states. Note that this last thing is also itself temperature dependent, so classical physics is crazy.
Nothing stops a person from combining the heat and sound contributions in classical physics, but they are like Frankenstein combinations — oh, this contribution is for heat, and that for sound, and this for their interaction. That is very different from truly unified descriptions in Quantum theory, where it is one term, and one term only, that we are looking at.
Because of that, I do not think I am exaggerating the shortcomings of classical physics. It simply is not a unified framework, although it is frequently possible to push approximations in classical physics to really high orders of accuracy. That, I can give, but not unification.
And even then, one should notice the tremendous difference in the mathematical methods involved. Yes, both approaches would heavily depend on Fourier analysis, but that is just about their similarities. Instead, a knowledge of the approximation techniques in classical physics is only useful for the continuum free-space approximation of the transport of various quantum objects, whereas proper quantum approximation techniques are frequently simpler than their classical counterparts.
Finally, bulk motion is very different from either sound or heat in any case, except for the fact that they are all of zero frequency (actually, this is how the normal mode mathematical technique announces its own failure, and there are ways to compensate formally). Luckily, it is seldom a problem that this is happening — after all, bulk motion would, somewhat, be better treated with relativistic methods.
• I suspect we’re talking past one another a bit here.
I’m a fluid dynamicist. I’ve studied some advanced solid mechanics and continuum mechanics, but mostly I’m a fluid dynamicist. When you say stuff like “bulk motion is very different from sound”, I think of the underlying physical principles, because in the derived practical equations I use this is not true. But when you say stuff like “the heat equation in classical physics, with the propagation constant kappa”, I think of the phrase “toy equation”. Even in the engineering form of the heat equation, or the Navier-Stokes equations for a linear isotropic fluid, kappa is a coefficient, not a constant (though turbulence modellers generally ignore its thermodynamic derivatives). And it doesn’t show up at all in the Boltzmann equation, unless you do the math and derive it.
Regarding “unified framework”, I expressed that poorly. Sure, in the engineering equations, first-order fluxes like acoustic propagation and bulk motion are handled differently than second-order fluxes like heat transfer and viscous stress. This is because their behaviours are different, so the simplest reasonably accurate mathematical descriptions of them will unavoidably be different. But it should never be forgotten that they can both be derived from the same statistical mechanical representation.
It strikes me that what the Boltzmann equation is to fluid mechanics is somewhat analogous to what Schrödinger’s equation is to quantum condensed matter physics (though it isn’t quite as fundamental). The general form isn’t very useful by itself, but specializations and approximations can produce good enough results to translate into engineering equations. The key to the Boltzmann equation (assuming you have enough dimensions to describe all important degrees of freedom) is the collision operator, which could be said to be analogous to the Hamiltonian in the Schrödinger equation. The collision operator describes all interactions between particles and is very difficult to specify exactly for real physical systems, though a number of popular approximations exist. I gather this is a bit different from the quantum-mechanical approach you’re talking about, where a lot of condensed systems can be described surprisingly well with “noninteracting” approaches…
People have tried to use the Boltzmann equation (with or without quantum effects) to model solids, with mixed success. It seems to be best at fluids, especially gases and plasmas, perhaps because the molecular chaos assumption is difficult to remove.
Look, I’m not claiming that quantum physics is no better than classical physics. But you seem to be saying “classical physics” when you should be saying “classical engineering approximations”, and then drawing conclusions based on the conflation of the two. Comparing the Schrödinger equation to something like the Navier-Stokes equations, never mind the heat equation, is apples-to-oranges. You can actually derive all of the basic principles of fluid mechanics from Newtonian mechanics, without even referencing electromagnetics, though your accuracy won’t be very good…
I shouldn’t have gotten involved. I have a segfault to chase down…
• I see better where you are coming from. You are clearly talking about deeper stuff, and good luck with your segfault.
However, I do not think that your argument is convincing enough. Yes, it is possible to derive fluid equations and so forth from Newtonian mechanics. The problem still persists, however, that after the derivation (in which kappa turns out to be a derived quantity and actually not a constant), the treatment of heat and sound needs to be done as stitched patchworks on top of the same fundamentals.
As you rightly noted, I was saying that you don’t treat it that way in quantum physics, and it is quite important to see how it is actually handled differently.
Also, the “proper” way to deal with interacting quantum systems is to couple them. For example, when phonons and photons interact, a proper treatment is to deal with waves that are half-phonon and half-photon (polaritons) and then quantise them yet again. This is completely different from how classical approaches tackle these problems.
Yet again, I have to reiterate that I am not saying that you cannot get good results from classical considerations. What I am saying is that, due to how classical ideas actually arise from quantum fundamentals (namely, that everything classical tends to just be the conflation of modal [as in, most probable] behaviour as the _only_ [or mean, if you are talking about bulk stuff] possibility), the approximation schemes are doomed to complications for little gain. One of these is the asymmetrical treatment of heat and sound.
That is, even after you derive the heat and sound from the same underlying bulk motion of continuum mechanics, you still have to treat them separately, whereas quantum physics insists that they are _exactly_ the same thing, just different limits of the same _one_ term in any approximation scheme.
It is the same thing with fluids. Very few physicists are dealing with the Navier-Stokes equation itself, since it is now the preferred game of applied mathematicians. Instead of asking whether Navier-Stokes equations can have solutions for so-and-so kinds of problems, the physicists working on fluids tend to be working on the quantum corrections that should be added onto Navier-Stokes equations. After all, chaos sets in earlier than Navier-Stokes equations imply, because, near the critical points, modal behaviour is nowhere near the mean behaviour that we should have been focusing upon all this while. Sadly, this is so difficult that we have yet to do something fundamentally good about it.
In that case, I am not saying that the corresponding classical problems are not important or not good at describing physical systems, but that the quantum world view is very different. And since the fundamental picture clearly needs to be quantum, I merely mean to say that those quantum considerations happen to be even more important than the classical problems.
• Well, I got led on a merry chase and finally found what was causing the memory error. Turns out it was my fault all along…
Rather than “the problem still persists” after the derivation, I would say that the problem ARISES in the derivation of transport equations from the fundamentals. The Boltzmann equation doesn’t have separate terms for heat, sound, bulk motion, viscous stress, etc., because it directly describes the molecular motion those things are emergent properties of. It’s not continuum mechanics either; it’s perfectly capable of describing rarefied gases and even free molecular flow.
Of course quantum mechanics is a much better model than classical physics for condensed matter behaviour, and even some aspects of gas/plasma behaviour. I completely agree with you there. But I maintain that the specific criticism I was responding to, that of classical physics having an inherently fragmented picture of material mechanics, was not accurate, seemingly because of a mismatch in the fundamentality of the descriptions being compared.
• Sorry, I don’t know why, the comment system won’t allow me to reply to yours.
I see. That would be totally my ignorance, then.
However, I would like to point out, to replace the original wrong argument, that the natural ideal gases that we are familiar with are actually Fermi gases in the high temperature and low density limit. If that were not the case, we would run into what is known as Gibbs’ paradox, in which a classical gas, in the equations, somehow has a lot less pressure than expected. In particular, the ideal gas equation of pV = NkT would miss out the N, which is around 10^24. That makes no sense, until one realises that the quantum indistinguishability (which is basically quantum entanglements, really) needs to be taken into account.
I hope that little bit shows that even dilute gases, in which we do not expect quantum effects to be important, turn out to be critically dependent upon quantum ideas nonetheless. Of course, the rest of the system does not require quantum corrections, and there is an easy fudge factor to fix that problem, but it does show how quantum theory is still a vital component of everyday life, not some esoteric correction that only people caring about precise effects can observe. (Which is the underlying point I really wanted to outline, although my choice of example turned out to be wrong.)
Thanks. It seems, however, that the “classical atoms” view that is given by the Boltzmann equation incorporates enough physics to reproduce the important things I was caring about. Interesting.
• I hate to keep doing this, but… The Gibbs paradox has to do with the definition of entropy. If you don’t assume indistinguishability, you can toggle the entropy up and down by opening and closing a door between two identical reservoirs.
You can get the correct pressure just fine with classical gas kinetics. But there are other things about gas dynamics that require quantum treatment. The temperature dependence of ideal-gas specific heat in multiatomic substances, for instance, is quite substantial and entirely due to the quantization of internal energy storage modes (at lower temperatures, there usually isn’t enough energy in a collision to excite these modes).
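A quick sketch of that freezing-out, assuming a single quantized vibrational mode (the standard Einstein function; the 3000 K vibrational temperature is an illustrative, roughly diatomic-scale value):

    import numpy as np

    def c_vib_over_k(T, theta):
        # heat-capacity contribution of one quantized vibrational mode, in units of k_B
        x = theta / T
        return x**2 * np.exp(x) / np.expm1(x)**2

    theta = 3000.0  # vibrational temperature in K (illustrative)
    for T in [100.0, 300.0, 1000.0, 3000.0, 10000.0]:
        print("T = %7.0f K   C_vib/k = %.4f" % (T, c_vib_over_k(T, theta)))

At low T the mode is frozen out (C tends to 0, since collisions rarely have enough energy to excite it); at high T it recovers the classical equipartition value of one k_B per mode.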
• Or something like that – I had to look up Gibbs’ paradox, and I’m not completely sure my facile description above is right…
• Nah, I know the classical gas kinetics can derive the pressure just fine. Why, indeed, I was just teaching my student that elementary derivation.
But it does mean that both Boltzmann and Gibbs entropy cannot be derived from classical reasoning without the indistinguishability fudge factor. You would have to rely on Sackur–Tetrode entropy (removing all the quantum stuff and replacing them with an unknown constant, of course).
It might not seem like a big issue at first glance, but it actually is. Other than the fact that entropy of mixing (that you were describing) had to be discontinuously and manually handled, it does also mean that stuff like phase changes go haywire. Again, that is useless to a fluid dynamicist until you want to deal with, say, ice-water mixtures or critical phenomena.
Or worse, the theory is inconsistent. Judging by how seriously you take the mathematics, it is either screaming at you that you are doing something wrong, or that phenomenology needs to be used (by curve-fitting the unknown constant there, for one).
Instead, what I wanted to impress upon you is that, instead of deriving the pressure from kinetic theory (actually, what a bad name! It is not a theory, nor does kinetic make sense as its modifier. Instead, classical atomic model would be its rightful name), it is possible to subsume the entirety of classical thermodynamics into the 2nd Law. That is, given the existence and some assumed properties of the entropy, you can construct everything you find in classical thermodynamics, even without statistical thermodynamics. That is, 0th Law and 1st Law, in particular, are theorems if you assume the 2nd Law to be your postulate. Actually, it is even a bit less — you assume parts of 2nd Law, and prove the full form of the 2nd Law with the assumptions. The issue I was referring to, is that, if you take this view, in which pressure is just a derivative of the entropy via Maxwell’s relations, and then you try to construct the statistical thermodynamics from it, you will run into Gibbs’ paradox.
At the end, there is no need to worry about you dragging the conversation out. Actually, I was still waiting for some insights from you — you have already shown me wrong once, and there is no reason why you cannot teach me more.
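The N! point above, in compressed textbook form (standard material, not anything new from this thread): with $z = V/\lambda^3$ and $\lambda = h/\sqrt{2\pi m k T}$ the thermal de Broglie wavelength, treating the particles as distinguishable gives

$Z_N = z^N \;\Rightarrow\; S = Nk\left(\ln\frac{V}{\lambda^3} + \frac{3}{2}\right),$

which is not extensive (doubling N and V does not double S, which is the paradox), while dividing by the indistinguishability factor $N!$ gives

$Z_N = \frac{z^N}{N!} \;\Rightarrow\; S = Nk\left(\ln\frac{V}{N\lambda^3} + \frac{5}{2}\right),$

the Sackur-Tetrode form, extensive because $V$ only ever enters as $V/N$.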
5. I particularly like the statement:
6. Okay, physicist here: most of the things in the video are not new to me, but good presentation.
Commented, though, to point out that the “everything is named after Quantum” is an interesting recurring phenomenon in the USA. Perhaps the largest one was the use of “Radio” in naming things. Radio was the internet on steroids, the “tech stock” of the 1920s bubble. One of the most famous meaningless uses of Radio from the time was the little red wagon called a Radio Flyer. The company just put two hot buzz words together, and created a legendary product.
7. Pingback: The Webcomic Guide to Quantum Physics | Slackpile
8. Dear Jorge Cham,
I enjoyed your cute animation. Since you said you were looking for ways to think about quantum mechanics, I thought the resource list below might be interesting. Please feel free to contact me with questions.
David Liao
One of my physics professors from Harvey Mudd College (half-hour east of Caltech) wrote a wonderful book on quantum mechanics for junior physics majors:
John Townsend, A Modern Approach to Quantum Mechanics, University Science Books: Sausalito, CA (2000) (http://www.amazon.com/A-Modern-Approach-Quantum-Mechanics/dp/1891389785). The academic pedigree of this book comes through Sakurai’s Modern Quantum Mechanics.
Get a hold of Professor David Mermin at Cornell. Tell him you are working with Caltech on this animation series, and ask him to walk you through his slides on Bell’s inequalities and the Einstein-Podolsky-Rosen paradox (http://www.lassp.cornell.edu/mermin/spooky-stanford.pdf).
If you can meet with Sterl Phinney at Caltech, talk to him. He seems to know a lot about a lot, and he’s really fun to be around.
Fundamental concepts:
There is a variety of ways to introduce quantum mechanics. The following two flavors can provide particularly satisfying insight:
Path-integral formulation — A creative child can tell a bunch of different imaginary stories to explain how a particle got from situation A to another situation B during the course of a day. A mathematician can associate with each story a complex phasor. The phasors can be added (in a vector-like head-to-tail fashion) to obtain an overall complex number for getting from A to B, whose squared magnitude is the overall probability of getting from A to B. The concept of extremized action from classical mechanics (think of light taking the path of least time) is a limiting approximation of the quantum-mechanical path-integral formulation. For this brief description, I skipped a variety of details. This perspective is attributed to Richard Feynman. (A toy numerical version of this phasor sum appears just after these two descriptions.)
State vector, operators — An older, more traditional description of quantum mechanics centers around the state vector (often denoted |psi>). “All that can be described” about an entity of interest is hypothetically abstracted as a vector from a vector space of all possible descriptions that can be associated with the entity. It is hypothesized that the outcomes of measurements correspond to [real] eigenvalues of [Hermitian] operators that can act on the state vector, and that when it is appropriate to describe an entity using one single eigenstate of an operator, this means that the observation corresponding to that operator will without doubt yield the corresponding eigenvalue as the measured result.
Note: State |psi> is *not* wavefunction psi(x). psi(x) = <x|psi>, which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.
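To make the path-integral flavor above concrete, here is a toy phasor sum for a free particle with a single intermediate time slice (units with hbar = m = 1; all values illustrative):

    import numpy as np

    hbar, m = 1.0, 1.0
    xa, xb, T = 0.0, 2.0, 1.0        # fixed endpoints A and B, total time
    dt = T / 2.0

    x = np.linspace(-6, 8, 20001)    # candidate positions at the intermediate time ("stories")
    # short-time free-particle action of each two-segment story
    S = m * (x - xa)**2 / (2 * dt) + m * (xb - x)**2 / (2 * dt)

    phasors = np.exp(1j * S / hbar)  # one complex phasor per story
    print("stationary point of S:", x[np.argmin(S)], "(the classical path's midpoint)")

    near = np.abs(x - (xa + xb) / 2) < 0.5
    far = (~near) & (x > 3)
    print("|phasor sum near the classical path|:", abs(phasors[near].sum()))
    print("|phasor sum far from it|            :", abs(phasors[far].sum()))

Stories near the classical path have nearly stationary action, so their phasors add head-to-tail coherently; stories far from it have rapidly varying action and largely cancel, which is how extremized classical action emerges as the limiting approximation.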
Risky vocabulary:
It is important to be aware of verbal shortcuts that are used to make quantum seem more conceptually accessible in the short term but that, unfortunately, also make quantum much more difficult to understand fundamentally in the long term:
There is no motion in any energy eigenstate (ground state or otherwise). Words such as “vibration” and “zooming around” are only euphemistically associated with any *individual* energy eigenstate. As an example, the Born-Oppenheimer approximation for solving the time-independent Schrödinger equation by separating the electronic and nuclear degrees of freedom is often justified using a story that involves the phrase “the light electrons are whizzing around the nuclei faster than the massive nuclei are slowly vibrating around their equilibrium positions.” This is shorthand for saying that the curvature term associated with the nuclear coordinates is ignored as the first term in a perturbative expansion because it is suppressed by the ratio of the electron mass, m, to the nuclear mass, M (for details, http://www.math.vt.edu/people/hagedorn/).
Even though the Heisenberg relationship is often described using phrases such as “not knowing how we disturbed a particle by looking at it,” a more fundamentally satisfying understanding is obtained by seeing that some operators don’t commute. Because some pairings of operators, such as position and momentum, don’t share eigenvectors, it is impossible for an entity to simultaneously be in an eigenvector for one operator, say, x position, while also being in an eigenvector for the other operator, in this example, x momentum. Having the momentum well defined (being in an eigenvector for momentum) corresponds to being unable to associate one particularly narrow range of position eigenvalues with the entity. This is essentially the Fourier cartoon you used in the animation (narrowness in space corresponds to less specificity in frequency/wavelength and vice versa).
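In compact form, this is the standard Robertson inequality:

$\Delta A \, \Delta B \ge \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|, \qquad [\hat{x},\hat{p}] = i\hbar \;\Rightarrow\; \Delta x \, \Delta p \ge \frac{\hbar}{2}.$

Position and momentum fail to commute, so no state is simultaneously an eigenvector of both, which is the Fourier cartoon restated.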
Beware of popular reports of the experimental observation of a wavefunction. Pull up the abstract from the underlying peer-reviewed manuscripts. I bet that the wavefunction has not been directly observed. Instead, the squared-magnitude (probability distribution) has been inferred from a large collection of individual experiments. As an example, a recent work inferring the nodal structure (radii where probability of finding electron around an atomic core vanishes) became popularized as direct observation of the wavefunction, which is not the claim in the original authors’ abstract.
• Hi David,
By and large, a good write-up. But, still…
1. A minor point:
Did you miss something on the right-hand side of the equality sign? In any case, guess you could streamline the line a bit here.
2. A major point:
“There is no motion in any energy eigenstate.”
And, just, how do you know?
[And, oh, BTW, you could expand this question to include any other eigenstates as well.]
Anyway, nice to see your level of enthusiasm and interest for these *conceptual* matters as well. Coming from a physics PhD, it is only to be appreciated.
• Thank you for your reply. Hope the following is helpful!
1) Thank you for catching the typo in the sentence, “psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” This sentence should, instead, read, “psi(x) is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” I don’t know how to edit my post to correct this sentence.
2) You asked how it is possible to know that there is no motion in an energy eigenstate. Below, I include two ways to respond. The abstruse response is an actual answer and points to the insight you are seeking. If you look closely, you will see that the graphical response is not an actual answer. Instead, it is a fun exercise for “feeling the intuition” that energy eigenstates do not have motion. Both responses are important (many physicists enjoy both casual “proofs” and fluffy intuition).
Abstruse response:
We argue that an object that is completely described by one energy eigenstate has no motion. An energy eigenstate is a solution to the time-INDEpendent Schrödinger equation. It’s very “boring.” The only thing that happens to it, according to the time-DEpendent Schrödinger equation, is a rotation of its overall complex phase. This phase does not appear in expectation values, and so all expectation values are constants with time. To obtain motion, it is necessary to have a superposition of more than one state corresponding to at least more than one energy eigenvalue. In such circumstances, at least some of the complex phases will rotate at different time frequencies, allowing *relative* phases between states in the superposition to change with time.
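The same argument in two lines: for an energy eigenstate the time dependence is a pure overall phase,

$|\psi(t)\rangle = e^{-iEt/\hbar}|E\rangle \;\Rightarrow\; \langle A\rangle(t) = \langle E|e^{+iEt/\hbar}\,\hat{A}\,e^{-iEt/\hbar}|E\rangle = \langle E|\hat{A}|E\rangle,$

constant for every observable $\hat{A}$. For a superposition $c_1|E_1\rangle + c_2|E_2\rangle$, the cross terms in $\langle A\rangle(t)$ carry $e^{\pm i(E_2-E_1)t/\hbar}$, so expectation values oscillate at the Bohr frequency $(E_2-E_1)/\hbar$.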
I am not claiming that experimental systems that people abstract using energy eigenstates will never turn out, following additional research, to have any aspect of motion. I am saying that the *abstraction* of a single energy eigenstate itself (without reference to whether the abstraction corresponds to anything empirically familiar) is a conceptual structure that contains no concept of motion (save for the rotating overall phase factor).
The mathematics described above are very similar to the mathematics that describe the propagation of waves in elastic media. A pure frequency standing wave always has the same shape (though it might be vertically scaled and upside down). A combination of standing waves of different frequencies does not always maintain the same shape.
Graphical response:
Go to http://crisco.seas.harvard.edu/projects/quantum.html and play with the simulator. Now set the applet to use a Harmonic potential, and try to sketch, using the “Function editor,” the ground state from http://en.wikipedia.org/wiki/File:HarmOsziFunktionen.png
You might want to turn on the display of the Potential energy function to ensure an accurate width for the state you are sketching. Run the simulation. Notice that the function doesn’t move very much (or in the case that you sketched the ground state with perfect accuracy, it shouldn’t move at all). Now, sketch a different state that doesn’t look like any one of the energy eigenstates in the Wikipedia image above. This should generate motion (to some extent looking like a probability mound bouncing back and forth in the well).
You can also look at the animations at http://en.wikipedia.org/wiki/Quantum_harmonic_oscillator and see that the energy eigenstate examples (panels C,D,E, and F) merely rotate in complex space (red and blue get exchanged with each other), but the overall spatial probability distribution is unchanged.
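A minimal numerical analogue of that applet exercise (split-step evolution of the Schrödinger equation in a harmonic well, with hbar = m = omega = 1; all values illustrative):

    import numpy as np

    N, L = 512, 20.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = L / N
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    V = 0.5 * x**2                     # harmonic potential
    dt, steps = 0.01, 400              # evolve to t = 4

    def evolve(psi):
        # split-step Fourier integration of i d(psi)/dt = (p^2/2 + V) psi
        for _ in range(steps):
            psi = psi * np.exp(-0.5j * V * dt)                               # half potential kick
            psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))   # kinetic drift
            psi = psi * np.exp(-0.5j * V * dt)                               # half potential kick
        return psi

    ground = np.exp(-x**2 / 2)             # ground state (an energy eigenstate)
    displaced = np.exp(-(x - 2.0)**2 / 2)  # same shape, shifted: NOT an eigenstate
    for name, psi in [("ground   ", ground), ("displaced", displaced)]:
        psi = psi / np.sqrt((abs(psi)**2).sum() * dx)
        mean_x0 = (x * abs(psi)**2).sum() * dx
        mean_xT = (x * abs(evolve(psi))**2).sum() * dx
        print("%s  <x>(0) = %+.3f   <x>(t=4) = %+.3f" % (name, mean_x0, mean_xT))

The eigenstate’s probability distribution does not move, while the displaced packet’s <x> swings to roughly 2·cos(4) ≈ −1.31: the “probability mound bouncing back and forth” described above.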
3) You asked whether one would assert absence of motion for other eigenstates.
Not as a general blanket statement. The reason that energy eigenstates have no motion is that they are eigenstates, specifically, of the Hamiltonian. Yes, in some examples, it is possible for an eigenstate of another operator to have no motion (i.e. when that state is both an eigenstate of another operator, as well as of the Hamiltonian).
• Cool.
Your abstruse response really wasn’t so abstruse. But anyway, my point concerning the quantum eigenstates was somewhat like this.
To continue on the same classical mechanics example as you took, consider, for instance, a plucked guitar string. The pure frequency standing wave is “standing” only in a secondary sense—in the sense that the peaks are not moving along the length of the string. Yet, physically, the elements of the string *are* experiencing motion, and thus the string *is* in motion, whether you choose to view it as an up-and-down motion, or, applying a bit of mathematics, view it as a superposition of “leftward” and “rightward” moving waves.
The issue with the eigenstates in QM is more complicated, only because of the Copenhagen/every other orthodoxy in the mainstream QM. The mainstream QM in principle looks down on the idea of any hidden variables—including those local hidden variables which still might be capable of violating the Bell inequalities. They are against the latter idea, in principle—even if the hidden variables aren’t meant to be “classical.” Leaving aside a few foundations-related journals, the mainstream QM community, on the whole, refuses to seriously entertain any idea of any kind of a hidden variable—and that’s exactly the way in which the relativists threw the aether out of physics. … I was not only curious to see what your inclinations with respect to this issue are, but also to learn the specific points with which the mainstream QM community comes to view this particular manifestation of the underlying issue. In particular, do they (even if epistemologically only wrongly) cite any principle as they proceed to wipe every form of motion out of the eigenstates, or is it just a dogma? (I do think that it is just a dogma.)
Anyway, thanks for your detailed and neatly explanatory replies. … Allow me to come back to you also later in future, by personal email, infrequently, just to check out with you how you might present some complicated ideas esp. from QM. (It’s a personal project of mine to understand the mainstream QM really well, and to more fully develop a new approach for explaining the quantum phenomena.)
• Ah, I see better where you are coming from.
You are wondering what explanations someone might give for focusing on mainstream QM interpretations and de-emphasizing hidden variables perspectives. Off the top of my head, I can imagine what people might generally say. I can also rattle off a couple thoughts as to why my attention does not wander much into the world of hidden variables.
Anticipated general responses
(0) I imagine usual responses would refer to Occam’s Razor and/or the Church of the Flying Spaghetti Monster. People might say that Occam’s Razor (or something along the same lines) is a fundamental aesthetic aspect of the Western idea of “science.” I am not saying these references directly address the most logically reasoned versions of the concerns you might be raising.
(0.1) I think some professional scientists are laid back about conceptual cleanliness. It doesn’t bother them enough to “beat” the idea of motion in eigenstates out of students in QM. I know a couple of professional scientists who are OK with letting students think that electrons are whizzing around molecules.
Personal thoughts
(1) I don’t necessarily “believe” mainstream QM in a religious sense, but it feels natural (for my psychology). My gut feelings of certainty about existence of things somewhat vanish unless I am directly looking at them, touching them, and concentrating with my mind to force them “into existence” through brutal attention. People like to sensationalize mainstream QM by saying that it has counterintuitive indeterminacy. At the end of the day, what offends one person’s intuitions can be instinctively natural for someone else. I hear that mainstream QM is also “OK” for people who hold Eastern belief systems (I’m atheistish, so I don’t personally know).
(2) Mainstream QM has a particular pedagogical value. It offers an exercise in making reasoned deductions while resisting the urge to rely on (some) inborn intellectual instincts. I think it’s good for learning that we sometimes confuse [1] the subjective experience of *projecting* a well-defined, deterministic mental image of the dynamics of a system onto a mental blank stage representing reality with, instead, [2] the supposed process of directly perceiving and “being unified with” reality. Yes, philosophy courses can be valuable too, but in physics you can also learn to calculate the photospectra* of atoms and describe the properties of semiconductors and electronic consumer goods.
* Surprisingly difficult to do in a fully QM treatment at the undergraduate level. Perturbing the atom with a classical oscillating electric field is *not* kosher. It’s much more satisfying to quantize the EM field.
Does any of this mean that mainstream QM is true? No. No scientific theory is ever “true” (quotation marks refer to mock hippie existential gesture).
David Liao
P.S. I am happy to share my email address with you–how do I do that? Does this commenting platform share my address (sorry, not used to this system)?
• Hi David,
1. Re. Hidden variables.
Philosophically, I believe in “hidden variables” to the same extent (i.e. to the 100% extent) and for the same basic reason that I believe that a train continues to exist after it enters a tunnel and before it emerges out of the same. Lady Diana *could* suffer an accident inside a tunnel, you know… (I mean, she would have continued to exist even after entering that tunnel—whether observed by those paparazzi or not. That is, per my philosophical beliefs…)
Physics-wise, I (mostly) care for only those hidden variables which appear in *my* (fledgling) approach to QM (which I have still to develop to the extent that I could publish some additional papers). I mostly don’t care for hidden variables of any other specifically physics kinds. Mostly. Out of the limitations of time at hand.
2. Oh yes, (IMO) electrons do actually whiz around. Each of them theoretically can do so anywhere in the universe, but practically speaking, each whizzes mostly around its “home” nucleus.
3. About mysticism: Check out J.M. Marin (DOI: 10.1088/0143-0807/30/4/014). Mysticism was alive and kicking in the *Western* culture even at a time when Fritjof Capra was not yet born. The East could probably lay claim to the earliest and also a very highly mature development of mysticism, but then, I (honestly) am not sure to whom the credit for its fullest possible development should go: to the ancient mystics of India, or to Immanuel Kant in the West. I am inclined to believe that at least in terms of rigour, Kant definitely beat the Eastern mystics. And that, therefore, his might be taken as the fullest possible development. Accordingly, between the two, I am inclined to despise Kant even more.
4. About my email ID. This should be human readable (no dollars, brackets, braces, spaces, etc.): a j 1 7 5 t p $AT$ ya h oo [DOT} co >DOT< in . Thanks.
9. Great idea for doing this. Just a hint for getting more non-physicists involved: talk at half the speed you do; people need time to absorb and self-explain, otherwise no matter how simple it is, they lose you at the beginning.
10. Pingback: Quantum Matter Animated! | Astronomy physics an...
11. Pingback: Quantum Frontiers and Tuba! | Creative Science
• Mankei,
Interesting. You seem to have been having fun thinking about this field for quite some time.
Anyway, here are a couple of questions for you (and for others from this field):
(i) Is it possible to make a mechanical oscillator/beam detectably interact with single photons at a time (i.e. a statistically very high chance of only one photon at a time in the system)? [For instance, an oscillator consisting of the tip of a small triangle protruding out of a single layer of atoms, as in a graphene sheet? … I am just guessing wildly for a possible and suitable oscillator here.] Note, for single photons, it won’t be an _oscillator_ in the usual sense of the term. However, any mechanical device that mechanically responds (i.e. bends) would be enough.
(ii) If such a mechanical device (say an oscillator) is taken “to” “0” K, does/would/will it continue to show the red/blue asymmetrical behavior? [Esp. for Mankei] What do you expect?
• (i) In theory it’s possible; there have been a few recent theoretical papers on “single-photon optomechanics” that explore what would happen, but experimentally it’s probably very, very hard. Current experiments of this sort use laser beams with ~1e15 photons per second.
(ii) I have no idea what would happen then, because my math and my intuition always assume the laser beam to be very strong. Other people might be able to answer you better.
• Hi Mankei,
1. Thanks for supplying what obviously is a very efficient search string. (The ones I tried weren’t even half as efficient!) … Very interesting results!
2. Other people: Assuming that the gradual emergence of the red-blue asymmetry with the decreasing temperatures (near the absolute zero) continues to be shown even as the *light flux* is reduced down to (say) the single-photon levels, then, how might Mankei’s current model/maths be reconciled with that (as of now hypothetical) observation?
I thought of the single-photon version essentially only in order to remove the idea of “noise” entirely out of the picture.
If there is no possibility of any noise at all, and *if* the asymmetry is still observed, wouldn’t it form a sufficient evidence to demonstrate the large-scale *quantum* nature of the mechanical oscillator (including the possibilities of a transfer of a quantum state to a large-scale device)? Or would there still remain some source of a doubt?
• Hi Mankei,
We also thought about the issue you brought up in arxiv:1306.2699. See, for instance, a recent paper we published with Yanbei Chen and Farid Khalili (http://pra.aps.org/abstract/PRA/v86/i3/e033840).
• I would consider that our experiment measured both the sum AND difference of the red and blue sideband powers. The DIFFERENCE is indeed, as shown in your arxiv post mentioned above, due to the quantum noise of the light field measuring the mechanics. The noise power of the mechanics is in the SUM of the red and blue sidebands. Our experimental data was plotted as the ratio of the red and blue sidebands, which depends upon both the sum and difference of the sideband powers, and looks very different from what would be expected even for a semi-classical picture in which the light is quantized and the motion not.
• I guess we’ve already exchanged emails and come to a consensus, but just to recap, I agree that, through your calibrations, you’ve inferred zero-point mechanical motion and your result is consistent with quantum theory. The word “quantum” of course literally means something discrete and one could argue you haven’t observed “quantum” motion yet, but that’d be nitpicking.
• And to clarify, the asymmetry itself is not proof of zero-point mechanical motion or anything quantum. The mechanical energy was obtained from the SUM of the sidebands (as Oskar said), and the asymmetry was used as a *calibration* to compare the mechanical energy with the optical vacuum noise.
• Hi Mankei,
Thanks for your response. There are two main claims in your manuscript: claim 1) centers around the interpretation of our result, and claim 2) is a strong claim about classical stochastic processes being the source of our observed asymmetry.
In response to 1), the different interpretations of the result (and in particular, the relation between the optical vacuum noise and the zero-point motion) have been considered previously in great depth by our colleagues at IQIM (Haixing Miao and Yanbei Chen) and in Russia (Farid Khalili). I would like to point you to this paper: http://pra.aps.org/abstract/PRA/v86/i3/e033840.
In response to 2), you claim to “show that a classical stochastic model, without any reference to quantum mechanics, can also reproduce this asymmetry”. We also consider this possibility in a follow-up paper which came out last year (http://arxiv.org/abs/1210.2671), where we show a derivation exactly analogous to what you’ve shown, and then go to great lengths to experimentally rule out classical noise as the source of asymmetry (by varying the probe power and showing that the asymmetry doesn’t change, and by carefully characterizing the properties of our lasers).
More generally, there are fundamental limits as to what can be claimed regarding `quantum-ness' in any measurement involving only measurements of Gaussian noise. To date there have been 5 measurements of quantum effects in the field of optomechanics, our paper being the first one (the others are Brahms PRL 2012, Brooks Nature 2012, Purdy Science 2013, and Safavi-Naeini Nature 2013 (in press)). Unfortunately, all of these measurements are based on continuous measurement of Gaussian noise. There are several groups working hard on observing stronger quantum effects (as O'Connell Nature 2010 did in a circuit QED system), but we are still some months away from that.
Best, Amir
• Actually, I’d like to make that 6 papers – last week Cindy Regal’s group released this beautiful paper on arXiv: http://arxiv.org/abs/1306.1268.
Here as well, the `quantum-ness’ can only be inferred after careful calibration of the classical noise in the system, since the measurement is based on continuous measurement of Gaussian noise.
• Actually I’d like to make that 7 papers – I forgot about the result from 2008 from Dan Stamper-Kurn’s group: Murch, et al. Nature Physics, 4, 561 (2008).
12. Pingback: Quantum Matter Animated! | Space & Time | S...
13. Pingback: Quantum Matter Animated! | Far Out News | Scoop.it
14. Pingback: Quantum Theory and Buddhism | Talesfromthelou's Blog
15. I get very annoyed whenever somebody uses the phrases "quantum jump" or "quantum leap" to imply a BIG change in some domain (such as "our new Thangomizer represents a quantum jump in Yoyodyne's capabilities"). A quantum jump is the SMALLEST POSSIBLE state change in quantum mechanics, so when somebody claims their product represents a "quantum leap," I mentally translate that as "smallest possible degree of incremental improvement over their previous product!"
16. Pingback: My comments at other blogs—part 1 | Ajit Jadhav's Weblog
17. Is it that a higher red shift and lower blue shift indicate constant shrinking of the mirror? If that is true, then do we expect the red shift to die down, say, if we keep the mirror at 0 K for long enough?
18. Pingback: Squeezing light using mechanical motion | Quantum Frontiers
19. Pingback: The Most Awesome Animation About Quantum Computers You Will Ever See | Quantum Frontiers
20. Pingback: Hacking nature: loopholes in the laws of physics | Quantum Frontiers
21. Pingback: Human consciousness is simply a state of matter, like a solid or liquid – but quantum | Tucson Pool Saz: Tech - Gaming - News
22. Pingback: This Video Of Scientists Splitting An Electron Will Shock You | Quantum Frontiers
|
0d37b9d3f4c25c0a | Wave-Particle Duality
Can something possibly be, at one and the same time, both a discrete particle (Werner Heisenberg) and a continuous wave (Erwin Schrödinger)? The information interpretation of quantum mechanics says unequivocally No. For the quantum physicist, it is always either a wave or a particle.
The evolution of a quantum system, an electron or a photon, for example, goes in two stages.
The first is a wave stage in which the wave function explores all the possibilities available, given the configuration of surrounding particles, especially those nearby, which represent the boundary conditions for the Schrödinger equation of motion for the wave function. Because the space where the possibilities are non-zero is large, we say that the wave function (or "possibilities function") is nonlocal.
An observer cannot gain any empirical knowledge unless new information has first been irreversibly recorded, e.g., a particle has been localized in the experimental apparatus.
The second stage is when the photon or electron interacts with one or more of the surrounding particles. One of the nonlocal possibilities may be "actualized" or localized.
Information about the new interaction may be recorded. If the new information is irreversibly recorded, it may later be observed.
When you hear or read that electrons are both waves and particles, think "either-or" - first a wave of possibilities, then an actual particle.
That a light wave might actually be composed of quanta (later called photons) was first proposed by Albert Einstein as his "light-quantum hypothesis."
He wrote in 1905:
On the modern quantum view, what spreads out is a "nonlocal" wave of probability amplitude,
the possibilities for absorption, followed by a whole photon actually being absorbed ("localized") somewhere.
In 1909, Einstein speculated about the connection between wave and particle views:
This is wave-particle duality fourteen years before Louis deBroglie's matter waves and Erwin Schrödinger's wave equation and wave mechanics.
When light was shown to exhibit interference and diffraction, it seemed almost certain that light should be considered a wave...A large body of facts shows undeniably that light has certain fundamental properties that are better explained by Newton's emission theory of light than by the oscillation theory. For this reason, I believe that the next phase in the development of theoretical physics will bring us a theory of light that can be considered a fusion of the oscillation and emission theories...
Even without delving deeply into theory, one notices that our theory of light cannot explain certain fundamental properties of phenomena associated with light. Why does the color of light, and not its intensity, determine whether a certain photochemical reaction occurs? Why is light of short wavelength generally more effective chemically than light of longer wavelength? Why is the speed of photoelectrically produced cathode rays independent of the light's intensity? Why are higher temperatures (and, thus, higher molecular energies) required to add a short-wavelength component to the radiation emitted by an object?
The fundamental property of the oscillation theory that engenders these difficulties seems to me the following. In the kinetic theory of molecules, for every process in which only a few elementary particles participate (e.g., molecular collisions), the inverse process also exists. But that is not the case for the elementary processes of radiation.
Einstein's view since 1905 was that light quanta are emitted in particular directions. There are no outgoing spherical waves (except probability amplitude or "possibilities" waves). Even less likely are incoming spherical waves, never seen in nature.
Dueling Wave and Particle Theories
Not only do we have the problem of understanding wave-particle duality in a quantum system, we have a full-blown wave mechanical theory (deBroglie and Schrödinger) versus a particle mechanics theory (Heisenberg, Max Born, Pascual Jordan).
Before either of these theories was developed in the mid-1920's, Einstein in 1916 showed how both wave-like and particle-like behaviors are seen in light quanta, and that the emission of light is done at random times and in random directions. This was the introduction of ontological chance (Zufall) into physics, over a decade before Heisenberg announced that quantum mechanics is acausal in his "uncertainty principle" paper of 1927.
As late as 1917, Einstein felt very much alone in believing the reality (his emphasis) of light quanta:
I do not doubt anymore the reality of radiation quanta, although I still stand quite alone in this conviction.
Einstein in 1916 had just derived his A and B coefficients describing the absorption, spontaneous emission, and (his newly predicted) stimulated emission of radiation. In two papers, "Emission and Absorption of Radiation in Quantum Theory," and "On the Quantum Theory of Radiation," he derived the Planck law (for Planck it was mostly a guess at the formula), he derived Planck's postulate E = hν, and he derived Bohr's second postulate Em − En = hν. Einstein did this by exploiting the obvious relationship between the Maxwell-Boltzmann distribution of gas particle velocities and the distribution of radiation in Planck's law.
Einstein wrote:
The formal similarity between the chromatic distribution curve for thermal radiation and the Maxwell velocity-distribution law is too striking to have remained hidden for long. In fact, it was this similarity which led W. Wien, some time ago, to an extension of the radiation formula in his important theoretical paper, in which he derived his displacement law...Not long ago I discovered a derivation of Planck's formula which was closely related to Wien's original argument and which was based on the fundamental assumption of quantum theory. This derivation displays the relationship between Maxwell's curve and the chromatic distribution curve and deserves attention not only because of its simplicity, but especially because it seems to throw some light on the mechanism of emission and absorption of radiation by matter, a process which is still obscure to us.
But the introduction of Maxwell-Boltzmann statistical mechanical thinking to electromagnetic theory has produced what Einstein called a "weakness in the theory." It introduces the reality of an irreducible objective chance!
If light quanta are particles with energy E = hν traveling at the velocity of light c, then they should have a momentum p = E/c = hν/c. When light is absorbed by material particles, this momentum will clearly be transferred to the particle. But when light is emitted by an atom or molecule, a problem appears.
The "statistical interpretation" of Max Born tells us the outgoing wave is the probability amplitude wave function Ψ, whose absolute square is the probability of finding a light particle in an arbitrary direction.
Conservation of momentum requires that the momentum of the emitted particle will cause an atom to recoil with momentum hν/c in the opposite direction. However, the standard theory of spontaneous emission of radiation is that it produces a spherical wave going out in all directions. A spherically symmetric wave has no preferred direction. In which direction does the atom recoil? Einstein asked:
Does the molecule receive an impulse when it absorbs or emits the energy ε? For example, let us look at emission from the point of view of classical electrodynamics. When a body emits the radiation ε it suffers a recoil (momentum) ε/c if the entire amount of radiation energy is emitted in the same direction. If, however, the emission is a spatially symmetric process, e.g., a spherical wave, no recoil at all occurs. This alternative also plays a role in the quantum theory of radiation. When a molecule absorbs or emits the energy ε in the form of radiation during the transition between quantum theoretically possible states, then this elementary process can be viewed either as a completely or partially directed one in space, or also as a symmetrical (nondirected) one. It turns out that we arrive at a theory that is free of contradictions, only if we interpret those elementary processes as completely directed processes.
An outgoing light particle must impart momentum hν/c to the atom or molecule, but the direction of the momentum can not be predicted! Neither can the theory predict the time when the light quantum will be emitted.
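To get a feel for the size of this recoil, here is a worked example (the numbers are our own illustration, not from Einstein's papers): a sodium atom emitting yellow light at λ = 589 nm gives the photon a momentum p = hν/c = h/λ = (6.63 × 10^−34 J s)/(589 × 10^−9 m) ≈ 1.1 × 10^−27 kg m/s, so the atom, with mass m ≈ 3.8 × 10^−26 kg, recoils at v = p/m ≈ 3 cm/s. The kick is tiny but real, and its direction and timing are exactly what Einstein found the theory leaves to chance.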
Such a random time was not unknown to physics. When Ernest Rutherford derived the law for radioactive decay of unstable atomic nuclei in 1900, he could only give the probability of decay time. Einstein saw the connection with radiation emission:
It speaks in favor of the theory that the statistical law assumed for [spontaneous] emission is nothing but the Rutherford law of radioactive decay.
But the inability to predict both the time and direction of light particle emissions, said Einstein in 1917, is "a weakness in the theory..., that it leaves time and direction of elementary processes to chance (Zufall, ibid.)." It is only a weakness for Einstein, of course, because his God does not play dice.
Einstein clearly saw, as none of his contemporaries did, that since spontaneous emission is a statistical process, it cannot possibly be described with classical physics.
The properties of elementary processes required...make it seem almost inevitable to formulate a truly quantized theory of radiation.
How Einstein Discovered Wave-Particle Duality
Einstein was bothered by Planck's discovery of the blackbody radiation law. He said that it "rests on a seemingly monstrous assumption."
Planck had assumed that energy levels were discrete (compare Bohr's stationary states in the old quantum theory). Einstein saw that transitions between those levels should be discrete quanta. When Bohr formulated his atom theory (and for the next dozen years), he ignored Einstein's light quanta.
To arrive at a certain answer to this question, let us proceed in the opposite direction of Planck in his radiation theory. Let us view Planck's radiation formula as correct, and ask ourselves whether something concerning the composition of radiation can be derived from it.
Eight years later, in his paper on the A and B coefficients (transition probabilities) for the emission and absorption of radiation, Einstein carried through his attempt to understand the Planck law. He confirmed that light behaves sometimes like waves (notably when a great number of particles are present and for low energies), at other times like the particles of a gas (for few particles and high energies).
Dirac on Wave-Particle Duality
Quantum mechanics is able to effect a reconciliation of the wave and corpuscular properties of light. The essential point is the association of each of the translational states of a photon with one of the wave functions of ordinary wave optics. The nature of this association cannot be pictured on a basis of classical mechanics, but is something entirely new. It would be quite wrong to picture the photon and its associated wave as interacting in the way in which particles and waves can interact in classical mechanics. The association can be interpreted only statistically, the wave function giving us information about the probability of our finding the photon in any particular place when we make an observation of where it is.
Note that the information about the possibility of a photon at a given point does not have to be "knowledge" for some conscious observer. It is statistical information about the photon, even if it is never observed.
Some time before the discovery of quantum mechanics people realized that the connexion between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place.
Einstein, deBroglie, and Schrödinger had all argued that the light wave at some point might be the probable number of photons at that point.
|
45eda5d817593095 | My research lies at the interface between many-body quantum physics, quantum information theory and statistical physics.
In recent years, progress in quantum engineering has provided new tools for simulating and studying the quantum dynamics of truly isolated quantum systems. These systems, made of trapped ions or cold atoms in optical lattices, can be prepared in a global pure state, and their level of isolation from uncontrolled degrees of freedom is such that they can evolve unitarily according to the Schrödinger equation. Surprisingly, although these systems are at all times in a pure quantum state, they can display some signatures of thermalization. More strikingly, these systems can sometimes display local equilibrium states in strong disagreement with the standard statistical physics predictions. These experimental facts clearly call into question what kind of statistical description is relevant for isolated many-body quantum systems.
The aim of my current project is to investigate, theoretically and numerically, new methods for characterizing the out-of-equilibrium dynamics, the stationary properties and the propagation of quantum information in large interacting quantum systems. In other words, we are interested generally in the many-body quantum problem. The method we propose involves the introduction of a controlled amount of randomness in the modeling of the physical system considered, in particular in the interaction between its subparts. We think this framework could provide statistical solutions to the many-body problem.
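As a toy illustration of this idea (this is only one plausible reading of "controlled randomness in the interaction between subparts", not the group's actual model), one can couple two subsystems with fixed local Hamiltonians through a random Hermitian interaction of tunable strength, sketched here in Julia:
using LinearAlgebra
# Random Hermitian matrix of size n (one disorder realization)
randherm(n) = Hermitian((A = randn(ComplexF64, n, n); (A + A') / 2))
nA, nB, g = 4, 4, 0.1                        # subsystem sizes and coupling strength
HA, HB = randherm(nA), randherm(nB)          # fixed local Hamiltonians
V = randherm(nA * nB)                        # random coupling between the subparts
H = kron(Matrix(HA), Matrix{ComplexF64}(I, nB, nB)) +
    kron(Matrix{ComplexF64}(I, nA, nA), Matrix(HB)) + g * Matrix(V)
eigvals(Hermitian(H))                        # spectrum of this disorder realization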
Postdoctoral and PhD positions funded by the Leverhulme Trust are available.
Details: PhD, Postdoc. Please contact |
492ae39492fd5858 | Introduction to Quantum Mechanics
by Jacob Linder
80 pages
This book covers the basic principles of quantum mechanics along with key introductory topics.
1. A brief historical note on the origin of quantum mechanics
1. The insufficiency of classical physics
2. Fundamental principles and theorems in quantum mechanics
1. Describing particles as waves
2. The postulates of quantum mechanics
3. Eigenvalues and eigenfunctions
4. Expansion via eigenfunctions
5. Probability current and density
6. Simultaneous eigenfunctions
7. Time-evolution of expectation values
8. The Ehrenfest theorem
9. Heisenberg’s uncertainty principle
3. Solving the Schrödinger equation: bound states and scattering
1. Stationary states
2. Time-energy uncertainty: what it really means
3. Collapse of the wavefunction and superpositions
4. Wavefunction properties
5. Particle in a potential well
6. The δ-function potential
4. Quantum harmonic oscillator and scattering
1. Harmonic oscillator
2. Quantum mechanical scattering
5. Quantum mechanics beyond 1D
1. Particle in a box
2. Harmonic oscillator
3. 2D potentials with polar coordinates
6. Quantization of spin and other angular momenta
1. Orbital angular momentum
2. Central potentials and application to the Coulomb potential
3. Generalized angular momentum operators
4. Quantum spin
7. Quantum statistics and exchange forces
1. Symmetry of the wavefunction
2. The Pauli exclusion principle and its range
3. Exchange forces due to the Pauli principle
8. Periodic potentials and application to solids
1. Bloch functions
2. Band structure and the Kronig-Penney model
The aim of this book is to provide the reader with an introduction to quantum mechanics, a physical theory which serves as the foundation for some of the most central areas of physics, ranging from condensed matter physics to astrophysics. The basic principles of quantum mechanics are explained along with the important theorems that accompany them. We then proceed to discuss in detail arguably the most central equation in quantum mechanics, namely the Schrödinger equation, and how it may be solved and physically interpreted for various systems. A quantum treatment of particle scattering and the harmonic oscillator model is presented. The book covers how to deal with quantum mechanics in 3D systems and explains how quantum statistics and the Pauli principle give rise to exchange forces. Exchange forces have dramatic consequences experimentally and lie at the heart of phenomena such as ferromagnetism in materials. Finally, we apply quantum mechanics to the treatment of angular momentum operators, such as the electron spin, and also discuss how it may be applied to describe energy bands in solids.
About the author
|
229a62d0890206b5 | I know outer electrons include the $(n-1)d$ electrons plus all of the $n$ shell's electrons. But I don't understand; I've never been told what an unpaired electron is supposed to be. Is it the same thing?
so in the following electron configuration:
$4s^2\: 3d^{10}\: 4p^3$:
Outer electrons: $15$
Valence electrons: $5$
From what I remember, someone told me the number of unpaired electrons is $15$ too for this, which is the same as the number of outer electrons, so I guess it's the same thing. But I am not sure, so please help.
Outer electrons and unpaired electrons are not the same. Outer electrons are, as you said, $(n-1)d$ and all $n$'s. But unpaired electrons are different. To understand what they are, you must know what is electron spin (which, I hope you do). If you don't, read on.
Every electron in an atom has a quantum spin state (either clockwise or anticlockwise, nothing else) denoted by $+\frac{1}{2}$ and $-\frac{1}{2}$. These spin quantum numbers have no classical analogue, i.e., you can't relate them to any phenomenon in real life situations that you can visually perceive.
So, you see, in a filled orbital an electron is paired with another electron of opposite spin; together, the two of them fill one orbital, as for example $2p_x$, $3d_{xy}$ etc. These orbitals are found by solving the Schrödinger equation.
Now, whenever an electron in an orbital does not have another electron with opposite spin, it is called an unpaired electron.
The electronic configuration that is given is $4s^2\: 3d^{10}\: 4p^3$. As you can see, the $4p$ subshell holds $3$ electrons whereas its maximum capacity is $6$. $4p$ has $3$ orbitals, and so each orbital is occupied by one electron (according to Hund's rule). So, these three are unpaired electrons. All other shells are fully filled, and so there are no more unpaired electrons.
So, the number of unpaired electrons is 3, not 15.
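If it helps to see the counting as a small algorithm, here is a minimal sketch (my own illustration, with Hund's rule hard-coded; written in Julia):
# Unpaired electrons in one subshell with azimuthal quantum number l holding e electrons,
# filled according to Hund's rule: singles first, then pairing.
function unpaired(l, e)
    norb = 2l + 1                      # a subshell has 2l + 1 orbitals
    @assert 0 <= e <= 2norb "subshell capacity exceeded"
    return e <= norb ? e : 2norb - e   # each pairing removes one unpaired electron
end
unpaired(1, 3)   # 4p³: l = 1, three electrons → 3 unpaired, as above
unpaired(2, 10)  # 3d¹⁰: completely filled → 0 unpaired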
• $\begingroup$ how do you know 4p has three orbitals $\endgroup$ – Muhammad Umer Dec 12 '13 at 8:16
• $\begingroup$ well, I just know. If you aren't convinced, you should know that s has 1 orbital, p has 3, d has 5, f has 7 etc. Learn atomic structure and you will know. $\endgroup$ – Ris97 Dec 12 '13 at 15:38
• $\begingroup$ 1,3,5,7...hmm..nature sometimes make me wonder things...:D $\endgroup$ – Muhammad Umer Dec 12 '13 at 17:00
$\begingroup$ @MuhammadUmer To be more precise, a subshell with secondary quantum number $l$ is composed of $2l+1$ orbitals. These conditions arise while solving the angular part of the Schrödinger equation for a hydrogen-like atom. $\endgroup$ – Nicolau Saker Neto Dec 12 '13 at 21:19
|
6d4e369de157f659 | I am new to the field of computational physics and have a couple of questions regarding solving the non-linear Schrödinger equation using Operator splitting.
1) If the hamiltonian is of the form $H=\frac{\partial^{2}}{\partial x^{2}}+\gamma|\psi|^{2}$ then the standard procedure I understand is to exponentiate $-i(\gamma |\psi|^{2})\Delta t/\hbar$ and operate it on the initial value of $\psi$, then take a fourier transform to convert it to momentum space and operate it with exponential of $-ip^{2}\Delta t/\hbar$ and convert the resultant back to position space. We repeat this for each time interval $\Delta t$. Instead, why can't we do everything in the momentum space to begin with? Why this back and forth shifting from position to momentum space?
2) Suppose now I have an additional term of $\frac{\partial^{2}}{\partial x^{2}}|\psi|^{2}$ in the Hamiltonian; how do I accommodate this term in the scheme of the split-operator method?
• $\begingroup$ Are you sure it's not $\gamma|\psi|^2$, which is real, unlike $\gamma\psi|\psi|^2$? You're describing a particular operator splitting method, there are others. It might help to describe it explicitly using formulas and linear algebra, instead of just words. In particular, you have to see that the Fourier transform diagonalizes $\partial/\partial x$—something I find impossible to describe in words. $\endgroup$ – Kirill Feb 1 '17 at 21:55
• $\begingroup$ @Kirill My bad. Edited the mistake. Also, I see that the fourier transform diagonalizes ∂/∂x, but my question is not that. What I am asking is that why can't we work only in the momentum space where the partial derivative operator, as you mentioned, can be written as p, i.e. ∂/∂x→p? I am talking about the split-step method in the following wikipedia page, to be specific: en.wikipedia.org/wiki/Split-step_method $\endgroup$ – Abhijit Feb 2 '17 at 6:48
• $\begingroup$ Because then $\gamma|\psi|^2$ isn't diagonal? The whole point of solving $x'=Ax$ with $e^{At}x_0$ is that it's really easy when $A$ is diagonal, and hard otherwise. It's not really clear to me what you're asking. $\endgroup$ – Kirill Feb 2 '17 at 18:14
• $\begingroup$ @Kirill I guess I can put forth my difficulty more clearly when you address point 2 of my question. To re-state it, if I put an additional term of $\partial^2|\psi|^2/\partial x^2$ , then how do you evolve the system using the split step method? $\endgroup$ – Abhijit Feb 3 '17 at 17:29
• $\begingroup$ Can you write out the exact expression for the whole PDE? That term looks so odd in the context of NLSE, that it would really help to be explicit. $\endgroup$ – Kirill Feb 3 '17 at 17:36
The Strang splitting method goes like this. You start with the PDE $$ u_t = (L+B)u, \qquad L = \partial_x^2, $$ and you notice that when $L$ and $B$ are independent of $t$, the exact solution after time $\delta t$ to this is $$ u = e^{(L+B)\delta t}u_0 \approx e^{\frac12 L\delta t}e^{B\delta t}e^{\frac12 L\delta t}u_0. $$ Because $B$ is just a function of $x$, the operator $e^{B\delta t}$ just multiplies by $e^{B(x)\delta t}$. Because $L = F \hat L F^{-1}$ is diagonalizable by the Fourier transform (assuming the right boundary conditions), $$ e^{L\delta t} = F e^{\hat L \delta t} F^{-1},$$ where $e^{\hat L\delta t}$ is the operator that multiplies each Fourier mode by $e^{-k^2\delta t}$ (this depends on choice of normalization).
This is why Fourier transforms are done at each step: in the Fourier basis, and only in that basis, is $L$ diagonal, which makes it trivial to compute its exponential.
For your equation, to get what $e^{B\delta t}$ would look like, you write out the relevant portion of the equation, with only $B$ present: $$ i\hbar u_t = (\gamma |u|^2 + \alpha (|u|^2)_{xx})u. $$ One thing you could do is to approximate $g(x) \approx (|u_0|^2)_{xx}$, so that $$ u(t,x) \approx u(0,x)\exp\left(\frac{\gamma|u(0,x)|^2 + \alpha g(x)}{i\hbar}\,\delta t\right). $$ Because of the nonlinearity, it might work, but I think there isn't a guarantee that it will—I haven't tried it. But the idea is still the same: split the r.h.s. into two operators, and for each operator solve the corresponding PDE, choosing the operators in a way that makes this step easy.
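To make this concrete, here is a minimal sketch of one Strang step for $i u_t = -u_{xx} + \gamma|u|^2u$ (with $\hbar = 1$; the grid, parameters and initial condition are purely illustrative), written in Julia with FFTW.jl. For the extra term you would add a numerical estimate of $(|u|^2)_{xx}$ to the nonlinear phase factor, exactly as described above:
using FFTW
# One Strang step e^{Lδt/2} e^{Bδt} e^{Lδt/2} for i u_t = -u_xx + γ|u|²u (ħ = 1)
function strang_step!(u, k, γ, δt)
    halfL = exp.(-im .* k.^2 .* δt ./ 2)          # dispersive half-step, diagonal in k-space
    u .= ifft(halfL .* fft(u))
    u .= exp.(-im .* γ .* abs2.(u) .* δt) .* u    # nonlinear step, diagonal in x-space
    u .= ifft(halfL .* fft(u))
    return u
end
N, L = 256, 40.0                                  # points and length of the periodic grid
x = (0:N-1) .* (L / N) .- L / 2
k = 2π .* fftfreq(N, N / L)                       # wavenumbers in fft ordering
u = Complex.(exp.(-x .^ 2))                       # Gaussian initial condition
for _ in 1:100
    strang_step!(u, k, 1.0, 1e-3)
end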
• $\begingroup$ Ok, so you mean to say that I numerically evaluate $\nabla^{2}|\psi|^{2}$ and use it while calculating $exp([|\psi|^{2}+\nabla^{2}|\psi|^{2}]\delta t)$ at each time step?(I have omitted the constants). $\endgroup$ – Abhijit Feb 5 '17 at 7:18
• $\begingroup$ Yes: that's the straightforward extension of the method to this kind of nonlinearity. Mind you: I haven't tried it. $\endgroup$ – Kirill Feb 5 '17 at 17:54
• $\begingroup$ Ok. That seems like a decent idea. Also, I have an analytic expression to compare my results with, so that's one nice thing. Also, I came across a few papers regarding my difficulty yesterday, notable amongst which was 'arxiv.org/pdf/1305.7205.pdf'. You might want to have a look. $\endgroup$ – Abhijit Feb 6 '17 at 6:52
|
784ea57dd878798a | This notebook can be found on github
Entanglement of Two Qubits
Given a system of two qubits (two spin-1/2 particles) that are initially in a separable state (product state), it is necessary to apply a non-local operation in order to create entanglement between the qubits. We can do this by evolving the system with a non-local Hamiltonian, that will then periodically generate entanglement.
Consider two qubits initially in the state
$|\psi_0\rangle = |0\rangle \otimes |1\rangle = |\downarrow\rangle \otimes |\uparrow\rangle.$
If we compute the time evolution of this state with the Hamiltonian
$H = \Omega\left(\sigma^+\otimes \sigma^- + \sigma^-\otimes\sigma^+\right),$
we will see that entanglement is created periodically. The Von Neumann entropy of the reduced density matrix of one of the sub-systems will serve as the measure of the two-qubit entanglement. It is defined as
$S(\rho_\mathrm{red}) = -\mathrm{tr}\left(\rho_\mathrm{red}\log(\rho_\mathrm{red})\right) = -\sum_n\lambda_n\log(\lambda_n),$
where $\lambda_n$ is the $n$th eigenvalue of $\rho_\mathrm{red}$, $\log$ is the natural logarithm and we define $\log(0)\equiv 0$. In our case the reduced density matrix is
$\rho_\mathrm{red} = \mathrm{tr}_{1,2}(\rho),$
where $\rho$ is the density matrix of the entire system and $\mathrm{tr}_{1,2}$ is the partial trace over the first or second qubit. As always, we start by importing the required libraries and defining the necessary parameters.
using QuantumOptics
using PyPlot
# Parameters
Ω = 0.5
t = [0:0.1:10;]
Then we proceed to define the Qubit basis as spin-1/2 basis and write our Hamiltonian accordingly.
# Hamiltonian
b = SpinBasis(1//2)
H = Ω*(sigmap(b) ⊗ sigmam(b) + sigmam(b) ⊗ sigmap(b))
Defining the initial state, we can evolve using a Schrödinger equation since there is no incoherent process present.
ψ₀ = spindown(b) ⊗ spinup(b)
tout, ψₜ = timeevolution.schroedinger(t, ψ₀, H)
As explained above, we need the reduced density matrix of one of the qubits. We therefore take the partial trace and compute the Von Neumann entropy using the implemented function entropy_vn. Note that the maximal Von Neumann entropy is $\log(2)$. Here, we rescale it by this factor, such that $0\leq S \leq 1$.
# Reduced density matrix
ρ_red = [ptrace(ψ ⊗ dagger(ψ), 1) for ψ=ψₜ]
S = [entropy_vn(ρ)/log(2) for ρ=ρ_red]
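As a quick cross-check (this is an addition of ours, not part of the original notebook): in the subspace spanned by $|\downarrow\uparrow\rangle$ and $|\uparrow\downarrow\rangle$ the Hamiltonian acts like $\Omega\sigma_x$, so the state at time $t$ is $\cos(\Omega t)|\downarrow\uparrow\rangle - i\sin(\Omega t)|\uparrow\downarrow\rangle$, and the rescaled entropy should be maximal whenever $\Omega t = \pi/4 + k\pi/2$.
xlogx(p) = p > 0 ? p * log(p) : 0.0               # convention: 0·log(0) = 0
S_exact = [-(xlogx(cos(Ω*τ)^2) + xlogx(sin(Ω*τ)^2))/log(2) for τ=tout]
maximum(abs.(S .- S_exact))                       # ≈ 0 up to numerical error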
Finally, we plot the result.
figure(figsize=(6, 3))
plot(tout, S) |
a5051f21ccfc535b | Alternative Health Pioneers
50 Pioneers & Visionaries in Holistic and Alternative Health
All of these practices were once questioned by the masses. These men and women helped revolutionize their respective fields and push forward the idea of alternative medicine and new practices. While these practices were at one time considered outrageous, they are now readily accepted by most sources of medicine. We celebrate the top 50 pioneers of alternative medicine.
1. Dr. Paul Bragg - Year: 1895-1976 - Fields: Exercise, Nutrition - Country: USA - Dr. Bragg helped bring natural health to the nation's attention by advocating for deep breathing, water fasts, organic food, juicing and exercising. Often referred to as the Father of the Health Movement in America, Bragg opened a bevy of health centers in Los Angeles during the mid-1920s. A clear pioneer of today's organic movement, Bragg opened some of the first health food stores in the nation. He also opened health-inspired restaurants and was one of the first people in California to open a spa. In 1929, Bragg began touring the country on an extended lecture series regarding health. Eventually he formed The Bragg Health Crusade, a ground-breaking televised health show. The hour-long show consisted of exercises, health recipes, demonstrations and interviews with famous health experts. Bragg is also considered instrumental in health cuisine. His biggest contribution may have been his focus on exercise, something that was not taken seriously before Dr. Paul Bragg walked the earth.
2. Adele Davis - Year: 1904-1974 - Fields: Nutrition - Country: USA - After graduating from UC Berkeley in 1927, Davis would spend her life drawing attention to nutrition. An educated nutritionist, she began her career working in hospitals and schools in the 1920s. By the 1930s, she relocated to the West Coast, where she worked as a consulting nutritionist. There she advocated for unprocessed foods and spoke out against food additives. Davis was one of the first to publicly criticize the corporatization of food. Davis predicted the increase in food additives, chemicals and GMOs, warning people long before the mass media publicized the opposition. She was a best-selling author, selling over 10 million copies during her life. During her time Davis received a great deal of backlash for her comments regarding nutrition; some of her peers wanted nothing to do with her. While she paid a price for doing the right thing, many of her quotes endure: "A great deal of sickness is caused by refined foods." She also stated that "we are literally at the mercy of the unethical refined food industry, who take all the vitamins and minerals out of food." She helped popularize the notion that a big breakfast, a medium lunch, and a small dinner were the way to go when it came to daily meals. When it was all over, Davis had a significant impact on the way people think about and understand food.
3. Daniel David Palmer - Year: 1845-1913 - Fields: Chiropractic - Country: USA - In 1895 Daniel David Palmer met Harvey Lillard, a janitor whose hearing was impaired. Palmer claimed the man's hearing was restored after adjusting his spine. He then developed the theory that misalignment of bones in the body was the basic underlying cause of all "dis-ease." Palmer thought that the majority of these misalignments were in the spinal column. While not entirely accurate, these two theories would one day grow into a worldwide practice. In 1897 he opened the Palmer School of Chiropractic in Davenport and started teaching his techniques. Lawsuits followed and, after a brief time in jail, Palmer sold the school to his son. The son expanded the school and the general knowledge of chiropractic. Palmer moved west, opening several new schools in Oklahoma, California, and Oregon. Although the relationship with his son became strained, the two helped to pioneer chiropractic practices for the next century.
4. Tirumalai Krishnamacharya - Year: 1888-1989 - Fields: Yoga, Ayurveda - Country: India - Krishnamacharya is considered the father of modern yoga in the West, and is often credited as the most influential Yoga teacher of the 20th century. In school he focused his studies on logic and Sanskrit, and he studied with the yoga master Sri Babu Bhagavan Das. Krishnamacharya sought to further his yoga studies by seeking a master named Yogeshwara Ramamohana Brahmachari, who was rumored to live in the mountains beyond Nepal. After 12 weeks of walking, Krishnamacharya arrived at the school, a remote cave at the foot of Mount Kailash. Under Brahmachari's tutelage, Krishnamacharya spent seven and a half years studying the therapeutic aspects of yoga. He was the teacher of renowned Yoga experts B.K.S. Iyengar, Pattabhi Jois, T.K.V. Desikachar, A.G. Mohan and Indra Devi. He held degrees in all six Indian philosophies. The India native is widely considered the architect of Vinyāsa, in the sense of combining breathing with movement. He possessed thorough knowledge of nutrition, herbal medicine, the use of oils, and other remedies. According to Krishnamacharya, a disease whose source is in a particular area of the body could affect many other systems of the body, both mental and physical. He would work with patients on a number of levels, including adjusting their diet, creating herbal medicines, and setting up a series of yoga postures that would be most beneficial. He authored four books on Yoga: Yoga Makaranda (1934), Yogaasanagalu (1941), Yoga Rahasya, and Yogavalli (1988). Krishnamacharya particularly stressed the importance of combining breath work with the postures of yoga and meditation to reach the desired goal. His yoga instruction reflected his conviction that yoga could be both a spiritual practice and a mode of physical healing. Krishnamacharya approached every student as absolutely unique, "in the belief that the most important aspect of teaching yoga was that the student be taught according to his or her individual capacity at any given time".
5. Joe Weider - Year: 1919-2013 - Fields: Fitness, Nutrition, Weight Lifting - Country: USA - A pioneer of the modern fitness movement, Weider brought strength and fitness to the public's consciousness. At age twelve, Weider purchased two used bars and built a set of barbells from surplus railroad parts, and began training. Two years later he competed in his first amateur contest and lifted more than any competitor in his weight class, earning him a national ranking. His goal was to one day publish a magazine that would provide accurate, complete weight training advice. In August 1940, he published the first issue of Your Physique. The magazine was an immediate success. In his 60-plus years in publishing, Weider oversaw a publishing empire that included Muscle & Fitness, as well as Muscle & Fitness Hers, Flex, Men's Fitness, Shape, Fit Pregnancy, and Natural Health. In 1946, Weider and his brother Ben formed the International Federation of Bodybuilders to promote a healthy lifestyle and organize competitions. In 1965 Weider created the Mr. Olympia contest, and in 1978 he went on to create the Ms. Olympia contest. His greatest contributions to the sport of bodybuilding are the Weider Training Principles. Compiled in 1950, after twelve years of study, these theories and techniques forever changed the understanding of building a strong body.
6. Rudolf Steiner - Year: 1861-1925 - Fields: Organic Farming, Spirituality, Education - Country: Austrian Empire - Attempting to find a synthesis between science and spirituality, Steiner went on a lifelong journey. In his early philosophical work, which he termed "spiritual science," he began to apply the thinking characteristic of Western philosophy to spiritual questions. In 1907, he began working with artistic media, drama, the movement arts and architecture. Steiner established various endeavors, including Waldorf education, biodynamic agriculture and anthroposophical medicine. He based his epistemology on Johann Wolfgang Goethe's world view, in which "Thinking is no more and no less an organ of perception than the eye or ear. Just as the eye perceives colors and the ear sounds, so thinking perceives ideas." Perhaps his biggest contribution was to the development of organic farming. His work led to the development of a broad range of complementary medications and supportive artistic and biographic therapies. Homes for children and adults with developmental disabilities based on his work can be found in Africa, Europe, and North America.
7. David E. Smith - Year: 1939-Present - Fields: Addiction Therapy/Drug Abuse - Country: USA - Smith is a medical doctor who specializes in addiction medicine, the psycho-pharmacology of drugs and proper prescribing practices for physicians. He is the Founder of the Haight Ashbury Free Clinics of San Francisco, a Fellow and Past President of the American Society of Addiction Medicine, Past President of the California Society of Addiction Medicine, Past Medical Director for the California State Department of Alcohol and Drug Programs, Past Medical Director for the California Center for Substance Abuse Policy Research, and former adviser to the Betty Ford Center. When Smith started his free Haight-Ashbury clinic it was among the first of its kind in the United States. While at the Haight-Ashbury clinic, several important figures in the future of alternative medicine studied under Smith. The doctor brought real personal care to each and every patient who visited the clinic. He would help reshape parts of health care, drug therapy and overall human care. Smith helped provide free health care to people who simply had no alternatives or means to obtain their own health care.
8. Amma (Mata Amritanandamayi) - Year: 1953-Present - Fields: Hinduism, Peace - Country: India - Amma is a Hindu spiritual leader and guru who is revered as a saint by her followers. Growing up, she would bring people in need food and clothing from her own home. Her family, which was not wealthy, punished her. In 1981, seekers had begun residing at her parents' property in Parayakadavu, in the hopes of becoming her disciples. In 1987, Amma began to conduct programs in countries throughout the world. In 2014, for the first time in history, major Anglican, Catholic, Orthodox Christian, Jewish, Muslim, Hindu, and Buddhist leaders met to sign a shared commitment against modern-day slavery organized by the Global Freedom Network. In July 2015, Amritanandamayi delivered the keynote speech at a United Nations Academic Impact conference on technology and sustainable development. She stresses the importance of meditation, performing actions as karma yoga, selfless service, and cultivating personal qualities like compassion, patience, forgiveness, and self-control. Amma's network of charity organizations provides food, housing, education, and medical services for the poor. This network has built and/or supported schools, orphanages, housing, and hospitals in over 40 countries. In the United States, the organization has provided soup kitchens and hot showers for the homeless. The hospital located in her home country offers medical care on a sliding scale, allowing people to pay what they can afford. On September 11, 2015, Amritanandamayi donated $15 million to the Government of India's Namami Gange "Clean the Ganges" program for constructing toilets for poor families living along the Ganges River. On December 9, 2015, Amritanandamayi donated $736,486 to the flood relief fund established by the Chief Minister of Tamil Nadu. At Amritanandamayi's direction, 500 volunteers from her organization helped to rescue victims and distributed food, clothing and medicine.
9. Pehr Henrik Ling - Year: 1776-1839 - Fields: Massage, Gymnastics - Country: Sweden - He pioneered the teaching of physical education in Sweden. Ling is also credited as the father of Swedish massage. In 1804 he established a gymnastic institute in Sweden. At that time, Ling began a routine of daily exercise. After discovering his daily exercises had restored his health, Ling decided to apply this experience for other people's benefit. He saw the potential of adapting techniques to promote better health and thus attended classes in anatomy and physiology. Ling ended up going through the entire curriculum for the training of a medical doctor. He then outlined a system of gymnastics, exercises, and maneuvers divided into four branches: pedagogical, medical, military, and aesthetic. Ling finally obtained government cooperation in 1813, and founded the Royal Gymnastic Central Institute. It was opened for the training of gymnastic instructors in Stockholm, with Ling appointed as principal. Ling invented physical education tools like the box horse, wall bars, and beams. He is also credited with developing calisthenics and free calisthenics. Orthodox medical practitioners were opposed to the claims made by Ling and his disciples. By 1831, Ling was elected a member of the Swedish General Medical Association, which demonstrated that his methods were worthy of professional recognition.
10. Wim Hof - Year: 1959-Present - Fields: Body Work, Extreme Conditioning, Breath Work - Country: Netherlands - Known as the “Iceman”, this extreme athlete is noted for his ability to withstand extreme cold. He attributes this to his Wim Hof Method (WHM) breathing techniques. Hof says that the WHM can treat or help alleviate symptoms of illnesses such as multiple sclerosis, arthritis, diabetes, clinical depression, anxiety, bipolar disorder, and cancer. He set out to spread the potential health benefits of his breathing techniques, working with scientists around the world to prove that his techniques work. A study published by the National Academy of Sciences in the U.S. stated that by consciously hyperventilating, Hof can increase his heart rate, adrenaline levels, and blood alkalinity. Hof holds 26 world records, including one for longest ice bath. In 2007 he climbed to 22,000 ft altitude at Mount Everest wearing nothing but shorts and shoes. In 2008 he broke his previous world record by staying immersed in ice for 1 hour, 13 minutes and 48 seconds. In 2009 Hof reached the top of Mount Kilimanjaro within two days, wearing only shorts. Hof completed a half marathon in Finland, with temperatures close to −4 °F. Dressed in nothing but shorts, Hof finished in 5 hours and 25 minutes. In the same year, Hof ran a full marathon in the Namib Desert without water. There are many variations of the Wim Hof Method. Each cycle goes as follows: take a powerful breath in, fully filling the lungs. Breathe out by passively releasing the breath, but not actively exhaling. Repeat this cycle at a steady rapid pace thirty times. Hof says that this form of hyperventilation may lead to tingling sensations or light-headedness. After completion of the 30 cycles of controlled hyperventilation, take another deep breath in, and let it out completely. Hold the breath for as long as possible. These three phases may be repeated for three consecutive rounds. Hof may help unlock some secrets within the mind for years to come.
11. Samuel Hahnemann - Year: 1755-1843 - Fields: Homeopathy - Country: Germany - Hahnemann was a German physician, best known for creating the system of alternative medicine known as homeopathy. In 1781, Hahnemann was dissatisfied with the state of medicine and objected to practices like bloodletting. He claimed that the medicine of his day mostly did patients more harm than good. After giving up his practice around 1784, Hahnemann researched the causes of medicine's alleged errors. He first used the term homeopathy in his essay Indications of the Homeopathic Employment of Medicines in Ordinary Practice, published in Hufeland's Journal in 1807. His research led him to realize that the toxic effects of ingested substances are often broadly parallel to certain disease states. His deep research into historical cases of poisoning in the medical literature only strengthened his belief. He first published an article about the homeopathic approach in a German-language medical journal in 1796. Following a series of further essays, he published Organon of the Rational Art of Healing in 1812, the first systematic treatise on the subject, containing all his detailed instructions.
12. Carl Rogers - Year: 1902-1987 - Fields: Psychology - Country: USA - Rogers is considered one of the most influential figures in the history of psychology and one of the founders of the humanistic (client-centered) approach to psychology. Rogers is widely considered to be one of the founding fathers of psychotherapy research and was honored for his pioneering research with the Award for Distinguished Scientific Contributions by the American Psychological Association in 1956. His work showed remarkable versatility across domains such as psychotherapy and counseling, education, organizations, and other group settings. Rogers believed that for a person to "grow", they need an environment that provides them with genuineness (openness and self-disclosure), acceptance, and empathy. Without these, relationships and healthy personalities will not develop as they should, much like a tree will not grow without sunlight and water. Rogers believed that every person could achieve their goals, wishes, and desires in life. He professed that each person was capable of reaching their potential, provided a number of conditions were satisfied.
13. Linus Pauling - Year: 1901-1994 - Fields: Biochemistry - Country: USA - Pauling was an American chemist, biochemist, peace activist, author, and educator. He published more than 1,200 papers and books. New Scientist called him one of the 20 greatest scientists of all time. Pauling was one of the founders of quantum chemistry and molecular biology. His contributions to the theory of the chemical bond include the first accurate scale of electronegativities of the elements. Pauling also worked on the structures of biological molecules, and showed the importance of the alpha helix and beta sheet in protein secondary structure. Pauling's approach combined methods and results from X-ray crystallography, molecular model building and quantum chemistry. His discoveries inspired the work of James Watson, Francis Crick, and Rosalind Franklin on the structure of DNA. In 1946, he joined the Emergency Committee of Atomic Scientists, led by Albert Einstein. Its mission was to warn the public of the dangers associated with nuclear weapons. Later in his career he promoted nuclear disarmament, as well as orthomolecular medicine, megavitamin therapy and dietary supplements. For his scientific work, Pauling was awarded the Nobel Prize in Chemistry in 1954. For his peace activism, he was awarded the Nobel Peace Prize in 1962. He is one of four individuals to have won more than one Nobel Prize.
14. Vincent Priessnitz - Year: 1799-1851 - Fields: Hydrotherapy - Country: Czech Republic - Priessnitz is generally considered the founder of modern hydrotherapy. He stressed remedies such as suitable food, air, exercise, rest and water over conventional medicine. Over 1500 patients and 120 doctors arrived to study the new therapy in 1839. A visit by Arch-Duke Franz Carl in October 1845 was greeted with an address extolling the virtues of Priessnitz and his methods, signed by 124 guests. In 1846 Priessnitz was awarded a medal by the Emperor. Priessnitz's practice spread to the U.S. after becoming established in Europe, and several hydropathic medical schools and journals were created in the United States. Some practitioners performed scientific experiments on the effects of known water-cures, and they developed new methods and theories about the field. The usage of extreme temperature was toned down to account for differences in patients' age and condition.
15. Benedict Lust - Year: 1872-1945 - Fields: Naturopathy - Country: Germany - Lust was one of the founders of the naturopathic medicine movement of the 20th century. He was an instrumental force in the development of holistic methods. As a youth Lust overcame tuberculosis through natural treatments. He journeyed to New York City in 1896 to create a system of healing methods that included dietetics, herbs, massage, electrotherapy, sun baths and other elements of the German nature cure tradition. He graduated from the New York Homeopathic Medical College in 1901. He obtained his osteopathic degree in 1902 from the Universal College of Osteopathy in New York. By 1901, Lust had decided to name his therapies "naturopathy". That year he opened the American School of Naturopathy in New York City, the first naturopathic medical school in the world. He went on to establish health resorts in New Jersey and Florida which acted as the Winter Campus for the American School of Naturopathy until 2001. After opening an early health food store, he began publishing several German- and English-language magazines advocating hydrotherapy and natural cure. Lust also created the American Naturopathic Association, the first national professional organization of naturopathic physicians. In 1918 he published the Universal Naturopathic Encyclopedia for drugless therapy, and also published Nature's Path magazine. Amongst all his accomplishments, he became known as the "Father of Naturopathy" in America.
16. Paul Bert - Year: 1833-1886 - Fields: Hyperbaric Chamber, Pressure - Country: France - In the 1800s, hyperbaric chambers became popular throughout Europe. The foundations of hyperbaric medicine were laid in 1872 by Paul Bert, an engineer, physician and scientist. Dr. Bert wrote about the physiological effects of air under increased and decreased atmospheric pressures. His classical work, La Pression barometrique, earned him the biennial prize of 20,000 francs from the Academy of Sciences in 1875. Central nervous system oxygen toxicity was first described in this publication and is sometimes referred to as the "Paul Bert effect". He showed that oxygen was toxic to insects, arachnids, myriapods, molluscs, earthworms, fungi, germinating seeds, birds, and other animals. He wrote a very successful textbook with Raphael Blanchard, Éléments de zoologie, in 1885. The Phrenological Journal and Science of Health claimed that he held atheistic beliefs. Bert's work laid the foundations for the modern field of hyperbaric medicine.
17. JR Worsley - Year: 1923-2003 - Fields: Acupuncture - Country: United Kingdom - Worsley is credited with bringing five element acupuncture, also known as traditional acupuncture, to the West. Born in the UK, he opened the College of Traditional Chinese Acupuncture, which trained many of the leading five element practitioners working today. Worsley was also responsible for starting the Academy for Five Element Acupuncture (AFEA), currently in Gainesville, Florida. This college was non-profit and was led by Dorit Reznik for several years. In later years, Worsley had ties to the acupuncture training school in Boulder, Colorado. Today, his wife, Judy Worsley, carries on the five element acupuncture tradition, training and certifying practitioners in schools she endorses. J. R. Worsley's influence was widely cited by others within the five element tradition, including Peter Eckman, author of In the Footsteps of the Yellow Emperor.
18. John Ernest Sarno Jr. - Year: 1923-2017 - Fields: Pain - Country: USA - Sarno's most notable achievement is the development, diagnosis, and treatment of tension myoneural syndrome (TMS), which is currently not accepted by mainstream medicine. According to Sarno, TMS is a psychosomatic illness causing chronic back, neck, and limb pain which is not relieved by standard medical treatments. He includes other ailments, such as gastrointestinal problems, dermatological disorders and repetitive-strain injuries, as TMS-related. Sarno stated that he successfully treated over ten thousand patients by educating them on his beliefs of a psychological and emotional basis to their pain. Sarno's books describe two follow-up surveys of his TMS patients. The first, in 1982, interviewed 177 patients selected randomly from those Sarno treated in the preceding three years; 76 percent stated that they were leading normal and effectively pain-free lives. A second follow-up study in 1987 restricted the population surveyed to those with herniated discs identified on CT-scans, and 88 percent of the 109 randomly selected patients stated that they were free of pain one to three years after TMS treatment. Notable patients of Sarno include Howard Stern, Tom Scharpling, Larry David, Anne Bancroft, Terry Zwigoff, John Stossel and Janette Barber. All seven have praised Sarno and his work highly. In 2012, Sarno appeared before the U.S. Senate Committee on Health, Education, Labor, and Pensions as part of a hearing "Pain in America: Exploring Challenges to Relief".
19. Swami Vivekananda - Year: 1863-1902 - Fields: Yoga - Country: India - Modern Yoga in the West gained traction in the late 1890s, when Indian monks began transmitting their knowledge to the Western world. Specifically, the influential Swami Vivekananda is often credited with introducing Yoga to the West. He first demonstrated Yoga postures at the 1893 World's Fair in Chicago. This generated much interest and laid the grounds for the welcoming of many other Yogis and Swamis from India in the years that followed. He created a lasting impression of Yogic philosophies (Raja Yoga) in the minds of Western audiences and also founded Yoga centers for training.
20. Robert Trias - Year: 1923-1989 - Fields: Martial Arts - Country: USA - Robert Trias, a U.S. Navy veteran, began teaching private lessons in Arizona in the mid-1940s. Trias is sometimes heralded as the father of American Karate, who helped spread the concepts behind martial arts. He is one of the first known American black belts, with Trias even developing his Shuri-ryu karate style that stems from Okinawan martial arts. Trias was introduced to martial arts while serving in the United States Naval Reserve during World War II. While stationed on the Solomon Islands in the mid-1940s, Trias met Tung Gee Hsiang, a Chinese missionary. Hsiang taught Trias Okinawan Shuri-Te Karate. In late 1945, Trias was training in his backyard, eventually opening the first public karate school run by a Caucasian in Arizona in 1946. Trias is commonly credited for opening the first martial arts school in the United States. By 1948, Trias opened the United States Karate Association. It was deemed the first martial arts organization on the American mainland. With the help of his organization, Trias was able to host the first national karate tournament in the United States at the University of Chicago in 1963. Many of the rules he used for subsequent tournament competition are still used today, with slight modifications. In the 1950s judo became required training for personnel serving in the Air Force's Strategic Air Command Division, all thanks to Robert Trias.
21. María Sabina - Year: 1894-1985 - Fields: Shamanism - Country: Mexico - Sabina was a Mazatec curandera who lived in the Sierra Mazateca of southern Mexico. Her practice was based on the use of psilocybin mushrooms, such as Psilocybe mexicana. María Sabina was the first contemporary native shaman to allow Westerners to participate in the healing vigil known as The Velada. All participants in the ritual ingested psilocybin mushrooms as a sacrament to open the gates of the mind. The Velada is seen as a purification and a communion with the sacred. In 1955, the US ethnomycologist and banker R. Gordon Wasson visited María Sabina's hometown and participated in the ceremony with her. He collected spores of the fungus, which he identified as Psilocybe mexicana, and took them to Paris. The fungus was cultivated in Europe and its primary active ingredient, psilocybin, was later isolated. Youth from the United States began seeking out María Sabina and the "magic" mushrooms as early as 1962. In the years that followed, thousands of counterculture mushroom seekers, scientists, and others arrived in the Sierra Mazateca. By 1967 more than 70 people from the US, Canada, and Western Europe were renting cabins in neighboring villages. Many 1960s celebrities were rumored to have visited Sabina, including rock stars such as Bob Dylan, John Lennon, Mick Jagger and Keith Richards. Eventually, Sabina attracted attention from the Mexican police, who believed her to be a drug dealer. The unwanted attention completely altered the social dynamics of her community and threatened to terminate the Mazatec custom. The community blamed Sabina, and she was ostracized.
22. Maharishi Mahesh Yogi - Year: 1918-2008 - Fields: Meditation, Hinduism, Yoga - Country: India - Mahesh was an Indian guru with a vast following, known for developing the Transcendental Meditation technique. He became a disciple and assistant of Swami Brahmananda Saraswati in the Indian Himalayas. The Maharishi credits Brahmananda Saraswati with inspiring his teachings. In 1955, the Maharishi began to introduce his Transcendental Deep Meditation to the world. In 1959, he began his first world tour, writing: "I had one thing in mind, that I know something which is useful to every man". The Maharishi is reported to have trained more than 40,000 teachers, taught the Transcendental Meditation technique to more than 5,000,000 people, and founded thousands of teaching centers and hundreds of colleges, universities and schools. The first world tour began in Rangoon, Burma and included Thailand, Malaya, Singapore, Hong Kong and Hawaii. He arrived in Hawaii in the spring of 1959 and the Honolulu Star Bulletin reported: "He has no money, he asks for nothing. His worldly possessions can be carried in one hand." That same year he began the International Meditation Society and other organizations to propagate his teachings, establishing centers in San Francisco and London, and trained Henry Nyburg to be the first Transcendental Meditation teacher in Europe. His 1962 world tour included visits to Europe, India, Australia and New Zealand. The Maharishi, his family and close associates created charitable organizations and for-profit businesses including health clinics, mail-order health supplements and organic farms. In the late 1960s and early 1970s, the Maharishi achieved fame as the guru to the Beatles, the Beach Boys and other celebrities. In 2000, he created the Global Country of World Peace, a non-profit organization, and appointed its leaders. In 1966, the Maharishi founded the Students' International Meditation Society ("SIMS"), which established chapters at over 1,000 campuses, including Harvard University, Yale University, and UCLA. At the end of 1968, the Maharishi said that after ten years of teaching and world tours, he would return to India.
23. Dr Joseph Pizzorno - Year: 1931-Present - Fields: Natural Medicine - Country: USA - Pizzorno is one of the world's leading authorities on science-based natural medicine. A naturopathic physician, educator and researcher, Dr Pizzorno is the founding president of Bastyr University. Under his leadership, Bastyr became the first fully accredited, multidisciplinary university of natural medicine. It was also the first NIH-funded center for alternative medicine research. Dr Pizzorno has authored many influential books and is the co-author of several landmark publications, including the internationally acclaimed Textbook of Natural Medicine and the Handbook of Natural Medicine. He also co-authored Encyclopedia of Natural Medicine, Natural Medicine for the Prevention and Treatment of Cancer and Encyclopedia of Healing Foods.
24. Mrs Margaret Grieve - Year: 1858-1941 - Fields: Herbal Medicines, Horticulture - Country: UK - Margaret Grieve was the principal and founder of The Whin's Medicinal and Commercial Herb School in Buckinghamshire, England. Grieve possessed extensive knowledge of medicinal plants. As a member of the Royal Horticultural Society she provided experience in all areas of herb growing, collecting, drying and marketing. During WWI, as supplies of conventional medicine dwindled, Mrs Grieve produced numerous pamphlets explaining the use of specific herbs as remedies against common illness. In turn this helped soldiers to stay healthy without conventional medicine, greatly aiding the war effort. Her 1931 publication A Modern Herbal is still a fixture today among herbal enthusiasts, recognized as one of the best resources for information on medicinal herbs and plants. A Modern Herbal contains extensive information on the therapeutic, culinary, cosmetic and economic properties of herbal medicines. Margaret Grieve has been credited with re-establishing herbal medicines in Britain during the early 1900s, and today her work continues to inspire herbalists and natural therapists across the world.
25. Samuel Thomson - Year: 1769-1843 - Fields: Herbalist - Country: USA - Thomson was a herbalist and botanist, best known as the founder of the alternative system of medicine known as "Thomsonian Medicine". The system enjoyed popularity in the United States during the 19th century. It all began when Thomson sustained a serious ankle injury while working on the family farm at the age of 16. Despite consistent attention from medical professionals, Thomson's ankle remained in critical condition, so Thomson drew on his extensive study of roots to make a root and turpentine plaster, which helped heal his ankle. When he became infected with measles, he again turned to herbal medicine to cure himself. Thomson developed his system with the help of two doctors. It was based upon opening the paths of elimination so that toxins could be removed via physiological processes. This was not unique to Thomson; regular physicians used calomel, a toxic mercury-based compound, to induce vomiting and purgation. Thomson's more moderate and less toxic means to medicate attracted large numbers of followers. At that time, licensed doctors and many of their methods, such as bloodletting, came under intense scrutiny. Thomson's innovative system was presented as an appealing alternative that allowed individuals to administer their own treatment using natural products. Thomson's movement reached more than a million Americans and started a medical reformation that would not peak for another 50 years. The brightest medical minds of the time were split vehemently both against and for Thomson's right to practice. Because of the success of Thomson and his followers, states began regulating medical practices along party and class lines.
26. Brian Hanley - Year: 1957-Present - Fields: Gene Therapy - Country: USA - Hanley is an American microbiologist known for using an experimental gene therapy to improve health span. Hanley holds a PhD in Microbiology from the University of California, Davis. In 2009 he founded Butterfly Sciences in Davis to develop a gene therapy to treat HIV using a combination of GHRH and an intracellular vaccine. He has numerous articles and academic publications in epidemiology, biotechnology, and economics. He said that he corresponded with the FDA prior to starting his self-experimentation, and the FDA told him he needed to file and get approval for an Investigational New Drug (IND) application before he tested the plasmid on a person; not having obtained an IND, he proceeded without it. The plasmids were administered twice: once in summer 2015 and a second, larger dose in July 2016.
27. Andrew Taylor Still - Year: 1828-1917 - Fields: Osteopathy - Country: USA - Still was the founder of osteopathy and osteopathic medicine. He was also a physician and surgeon, inventor and Kansas state legislator. He was one of the founders of Baker University, the oldest four-year college in the state of Kansas. At the time Still practiced as a physician, medications, surgery and other traditional therapeutic regimens often caused more harm than good. Some of the medicines commonly given to patients during this time were arsenic, castor oil, whiskey and opium. Additionally, unsanitary surgical practices often resulted in more deaths than cures. Still founded the first school of osteopathy based on this new approach to medicine, the American School of Osteopathy (now A.T. Still University), in Kirksville, Missouri in 1892. Osteopathy set standards in sterilization, surgical practices and other patient care procedures.
28. Sophia Delza - Year: 1903-1996 - Fields: Tai Chi - Country: USA - Delza was a modern dancer, choreographer, author, and practitioner of Tai Chi, which she taught at her school in New York City. She authored the first English language book on tai chi, T'ai Chi Ch'uan: Body and Mind in Harmony. Through her books, articles, lectures, and television appearances, Delza promoted the practice of Tai Chi for health and fitness. Delza was one of the earliest popularizers of Chinese martial arts in the United States. In 1954, she gave the first documented public demonstration of Tai Chi in America, at the Museum of Modern Art. That same year, she founded the Delza School of Tai Chi Chuan at Carnegie Hall. She subsequently began teaching Tai Chi as a form of exercise at the United Nations and the Actors Studio. Tai Chi often looks more like slow yoga than judo or karate, two martial arts that involve kicking and grappling. Many people practice Tai Chi as a gentle exercise, without any interest in its martial component. Yet the practice has been translated as "supreme ultimate fist" and "great extremes boxing". As practitioners like Sophia Delza understood, Tai Chi's slow pace represents control. Mastering the movements allows practitioners to develop strength, balance, and a unity between mind and action.
29. Sebastian Kneipp - Year: 1821-1897 - Fields: Hydrotherapy, Natural Medicine - Country: Germany - A Bavarian priest who began the Nature Cure movement in the 1890s, chiefly known for his contributions to hydrotherapy. Initially Kneipp was inspired by Vincent Priessnitz, a peasant farmer of the Austrian Empire. While serving as the confessor to the monastery, he began offering treatments of hydrotherapy, botanical treatments, exercise and diet to the people who lived in the village. Kneipp began developing his healing methods in 1849 after contracting tuberculosis and experimenting with the water treatments developed by Sigmund Hahn. After being ordained in 1852, he continued to experiment with water treatments in his parish. Kneipp's theory asserts that an imbalance in the blood is the root of disease. Kneipp also dabbled in other fields like botanical medicines, exercise, nutrition and balance. His suffering early in life caused Kneipp to develop a deep sympathy for those less fortunate than him. Kneipp expanded the definition of health to include a more holistic view encompassing mental, social, and spiritual aspects. In 1891, he founded the Kneipp Bund, an organization that promotes water healing. In America, Kneipp Societies were founded, which, under the influence of Benedict Lust, changed their name to the Naturopathic Society of America. Today there are 600 organizations that are a part of Kneipp Worldwide.
30. PK Warrier - Year: 1921-Present - Fields: Ayurveda, Nutrition - Country: India - Warrier is a renowned Ayurvedic physician. He is the chief physician and managing trustee of Arya Vaidya Sala. Its objective is for Ayurveda to be internationally recognized as a scientific system of healthcare. Since Warrier's time there, Arya Vaidya Sala has become a premier destination for scholars, students and patients. Over 800,000 people a year benefit from free consultations at the hospitals. While practicing Ayurveda as a scientific system of healthcare, he also acknowledges the validity of systems in other fields. He has guided the growth of the Vaidyaratnam P.S. Warrier Ayurveda College for over two decades. He has served as Dean of the Ayurveda Faculty, Calicut University and Chairman of its Board of Studies. He was twice elected as president of the All India Ayurveda Congress, once in 1981 and again in 2003. Warrier has helped establish an efficient quality control laboratory to test and certify both raw materials and finished products. A serious advocate of medical ethics, he strongly disapproves of all tendencies for medical practitioners to exploit people's ignorance. He has opposed all trends to commercialize Ayurveda, never compromising his principles.
31. Martin Seligman - Year: 1942-Present - Fields: Positive Psychology - Country: USA - Seligman is a strong promoter of positive psychology and well-being. His theory of learned helplessness is popular among scientific and clinical psychologists. A review published in 2002 ranked Seligman as the 31st most cited psychologist of the 20th century. Seligman is currently Family Professor of Psychology in the University of Pennsylvania's Department of Psychology. He was elected President of the American Psychological Association in 1998. Seligman's foundational experiments and theory of "learned helplessness" began at the University of Pennsylvania in 1967. Seligman argued that clinical depression and related mental illnesses result in part from a perceived absence of control over the outcome of a situation. Eventually he worked with Christopher Peterson to create what they describe as a "positive" counterpart to the Diagnostic and Statistical Manual of Mental Disorders (DSM). While the DSM focuses on what can go wrong, Character Strengths and Virtues is designed to look at what can go right. Their classification includes six core virtues: wisdom/knowledge, courage, humanity, justice, temperance, and transcendence. Each of these has three to five sub-entries; for instance, temperance includes forgiveness, humility, prudence, and self-regulation.
32. C. A. Ansar - Year: 1970-Present - Fields: Reflexology, Yoga, Ayurveda - Country: India - Ansar is a visually impaired practitioner of alternative medicine and the chief consultant at Dr. Ansar's Healing Touch, a healthcare center based in Kochi. He is known for his alternative medical practice, which combines the therapeutic techniques of reflexology, yoga, naturopathy and Ayurveda. Ansar did his post-graduate studies in Ayurveda at Sri Jayendra Saraswathi Ayurveda College and Hospital. It was during this period that he was diagnosed with glaucoma, a disease which affects the optic nerve, eventually leading to blindness. He completed his studies, but lost his vision completely in 2007. Ansar continued his studies and underwent training in Swedish massage, Sujok therapy, Yoga and Reflexology. After obtaining a practitioner's license from the Indian Board of Alternative Medicines, he started his own private practice, Dr. Ansar's Healing Touch Health Centre. Here, he has trained and employed visually impaired people as therapists. He also delivers motivational speeches at various seminars. The Chavara International Institute for Visually Challenged awarded him the Chavara Excellence Award in 2014.
33. Albert Ellis - Year: 1913-2007 - Fields: Psychology - Country: USA - In 1955 Ellis developed Rational Emotive Behavior Therapy (REBT). He held MA and PhD degrees in clinical psychology from Columbia University. He is considered to be one of the originators of the cognitive paradigm shift in psychotherapy and one of the founders of cognitive-behavioral therapies. Based on a 1982 professional survey of US and Canadian psychologists, he was considered the second most influential psychotherapist in history. Psychology Today noted, "No individual has had a greater impact on modern psychotherapy." Ellis advocated a new, more active and directive type of psychotherapy. In 1955, he presented rational therapy (RT). In RT, the therapist sought to help the client understand that his personal philosophy contained beliefs that contributed to his own emotional pain. This new approach stressed changing a client's self-defeating beliefs and behaviors by demonstrating their irrationality, self-defeatism and rigidity. Ellis believed that through rational analysis and cognitive reconstruction, people could understand their core irrational beliefs and then develop rational constructs. By 1957, he formally set forth the first cognitive behavior therapy by proposing that therapists help people adjust their thinking and behavior as a treatment for emotional and behavioral problems. Ellis published his first major book on REBT in 1962. REBT is an active-directive, philosophically and empirically based psychotherapy, the aim of which is to resolve emotional and behavioral problems and disturbances and to help people to lead more fulfilling lives. REBT is seen as the first form of cognitive behavioral therapy.
34. Thich Nhat Hanh - Year: 1926-Present - Fields: Zen - Country: Vietnam - Nhất Hạnh is a Buddhist monk and peace activist who has published more than 100 books. He is active in the peace movement, promoting nonviolent solutions to conflict. He also refrains from animal product consumption as a means of non-violence towards animals. At the age of 16 he entered the monastery at nearby Từ Hiếu Temple. Thích Nhất Hạnh received training in Vietnamese traditions of Mahayana Buddhism, as well as Vietnamese Thiền, and received full ordination as a Bhikkhu in 1951. In 1956 Nhất Hạnh was named editor-in-chief of the periodical Vietnamese Buddhism. In the following years he founded Lá Bối Press, the Vạn Hạnh Buddhist University in Saigon, and the School of Youth for Social Service. Buddhist peace workers went into rural areas to establish schools, build healthcare clinics, and help rebuild villages. He established two monasteries in Vietnam, and has helped establish monasteries and Dharma centers in California, Vermont, Mississippi, and New York. The master's approach has combined a variety of teachings of Early Buddhism, Mahayana Buddhist traditions of Yogācāra and Zen, and Western psychology to teach Mindfulness of Breathing and the Four Establishments of Mindfulness. Nhất Hạnh has been a trailblazer in the engaged Buddhism movement, promoting the individual's active role in creating change.
35. Sri Sri Ravi Shankar - Year: 1956-Present - Fields: Breathing, Ayurveda, Meditation - Country: India - Shankar is a renowned Indian spiritual leader. He founded the Art of Living Foundation in 1981, a volunteer-based NGO providing social support to the people. In 1997, he established a Geneva-based charity, the International Association for Human Values, an NGO that engages in relief work and rural development and aims to foster shared global values. Ravi Shankar's first teacher was Sudhakar Chaturvedi, an Indian Vedic Scholar and a close associate of Mahatma Gandhi. He holds a Bachelor of Science degree from the St Joseph's College of Bangalore University. After graduation, Shankar travelled with his second teacher, Maharishi Mahesh Yogi, giving talks and arranging conferences on Vedic science, and setting up Transcendental Meditation and Ayurveda centers. In the 1980s, Shankar initiated a series of courses in spirituality around the globe. He says that his rhythmic breathing practice, Sudarshan Kriya, came to him in 1982 after a ten-day period of silence on the banks of the Bhadra River in Shimoga. In 1983, Shankar held the first Art of Living course in Switzerland. In 1986, he travelled to Apple Valley, California in the US to conduct the first course held in North America. A number of medical studies on its preparatory practices have been published in international peer-reviewed journals. A range of mental and physical benefits are reported in these studies, including reduced levels of stress, improved immune system, relief from anxiety and depression, increased antioxidant protection, and enhanced brain function (increased mental focus, calmness, and recovery from stressful stimuli), among other findings.
36. Roger John Williams - Year: 1893-1988 - Fields: Biochemistry, Nutrition - Country: USA - Williams was an American biochemist who spent his academic career at the University of Texas at Austin. He is known for isolating and naming folic acid and for his roles in discovering pantothenic acid, vitamin B6, lipoic acid, and avidin. He served as the founding director of the Clayton Foundation Biochemical Institute from 1941 to 1963. Williams was elected to the National Academy of Sciences in 1946, and served as the president of the American Chemical Society in 1957. In his later career he spent time writing about the importance of nutrition. Williams is credited for emphasizing the "Biochemical Individuality" of each person with respect to their metabolic makeup and micronutrient needs. His book Biochemical Individuality emphasizes the uniqueness of nutritional requirements from person to person based on their genetic makeup, lifestyle, medical history and diet. Dr. Williams helped us all understand our individual nutritional needs much better.
37. Dr Mark Houston - Year: 1929-Present - Fields: Cardiovascular, Hypertension - Country: USA - As a world-renowned cardiologist, Dr. Houston's most remarkable contribution may well be his outstanding systems biology approach to cardiovascular disease, dyslipidemia and hypertension. With an exhaustive knowledge of human nutrition and metabolic medicine, Dr Houston has successfully pointed the way for the integrative and cardiovascular medicine of the future. His research has allowed him to develop a functional approach that addresses the underlying causes of cardiovascular disease, identifying inflammation, oxidative stress and autoimmune dysfunction as the principal factors. Dr Houston's clinical success, as well as his ability to convey his unconventional findings, has seen him selected as one of the Top Physicians in Hypertension in the world. USA Today cited him as one of the most influential doctors in the US in both Hypertension and Hyperlipidemia. He was selected for the Patients' Choice Award in 2010-2011 by Consumer Reports USA.
38. Dr John Cunningham Lilly - Year: 1915-2001 - Fields: Deprivation Tank - Country: USA - Lilly's greatest contribution was his sensory deprivation tank. He was a researcher of the nature of consciousness using mainly isolation tanks and psychedelic drugs. During World War II, Lilly researched the physiology of high-altitude flying and invented instruments for measuring gas pressure. After the war, he trained in psychoanalysis at the University of Pennsylvania, where he began researching the physical structures of the brain and consciousness. In 1951 he published a paper showing he could display patterns of brain electrical activity on a cathode ray display screen using electrodes he devised specially for insertion into a living brain. Lilly's work on electrical stimulation of the nervous system gave rise to biphasic charge balanced electrical stimulation pulses, now an established approach to design stimulation in neuroprosthetics. He devised the first isolation tank in 1954, a dark soundproof tank of warm salt water in which subjects could float for long periods in sensory isolation. Lilly and a research colleague were the first subjects of this research. After 10 years of experimentation without taking any psychoactive substances, he tried floating in combination with a psychedelic agent, mostly LSD. According to Lilly, electronics engineered by humans will eventually develop into an autonomous bioform (also known as artificial intelligence). Since the optimal survival conditions for artificial intelligence are drastically different from those humans need, Lilly predicted (based on his ketamine-induced visions) a dramatic conflict between the two forms of intelligence. Today, deprivation tanks have seen a serious rise in popularity.
39. Fritz Perls - Year: 1893-1970 - Fields: Gestalt Therapy - Country: USA - Fritz Perls, Laura Perls and Paul Goodman developed Gestalt therapy during the 1950s. It was first described in the 1951 book Gestalt Therapy. Perls became associated with the Esalen Institute in 1964, and he lived there until 1969. The core of the Gestalt Therapy process is enhanced awareness of sensation, perception, bodily feelings, emotion, and behavior, in the present moment. Relationship is emphasized, along with contact between the self, its environment, and others. In 1933, Fritz Perls, Laura, and their eldest child emigrated to South Africa, where Perls started a psychoanalytic training institute. During the early 1940s Fritz Perls wrote his first book, Ego, Hunger, and Aggression. After writing Gestalt Therapy in 1951, Fritz and Laura Perls started the first Gestalt Institute in their Manhattan apartment. Fritz Perls began traveling throughout the United States in order to conduct Gestalt workshops and training. In 1960 Fritz Perls left Laura Perls behind in Manhattan and moved to Los Angeles, where he practiced in conjunction with Jim Simkin. In 1963, he started to offer workshops in Big Sur, California. He also traveled to Japan, where he stayed in a Zen monastery. Eventually, he settled at Esalen, and even built a house on the grounds. One of his students at Esalen was Dick Price, who developed Gestalt Practice, based in large part upon what he learned from Perls. At Esalen, Perls collaborated with Ida Rolf, founder of Rolfing Structural Integration, to address the relationship between the mind and the body.
40. Michael Harner - Year: 1929-2018 - Fields: Shamanism - Country: USA - He founded the Foundation for Shamanic Studies and the New Age practice of "Core Shamanism." His 1980 book, The Way of the Shaman: a Guide to Power and Healing, has been significant in the development and popularization of "core shamanism" as a path of personal development. In 1961 he experimented with the Amazonian plant medicine ayahuasca, which he wrote about in the articles "The Sound of Rushing Water" (1968) and "The Role of Hallucinogenic Plants in European Witchcraft" (1973). In 1966, Harner became a professor at Yale and Columbia University. The following year he joined the graduate faculty of The New School for Social Research in New York City. He co-chaired the Anthropology Section of the New York Academy of Sciences. After traveling to the Amazon where he ingested the hallucinogen ayahuasca, Harner began experimenting with monotonous drumming. In 1979 he founded the Center for Shamanic Studies in Norwalk, Connecticut. In 1980, Harner published The Way of the Shaman: a Guide to Power and Healing. Students in the United States and Europe began to take his classes in what he was now calling "core shamanism”. Harner later integrated his Center for Shamanic Studies into the nonprofit Foundation for Shamanic Studies. In 1987, Harner resigned his professorship to devote himself full-time to the work of the foundation.
41. Mikao Usui - Year: 1865-1926 - Fields: Reiki - Country: Japan - Usui was the founder of the spiritual practice known as Reiki, used as a complementary therapy for the treatment of physical, emotional, and mental diseases. It is believed that the aim of Usui's teachings was to provide a method for students to achieve connection with the Universal Life Force energy that would help them in self-development. All of his students received five principles to live by and the students that took him seriously became his devoted followers. In 1994, the original manuscript of Usui was found, which claimed that Reiki had originated from Gautama Buddha. During the early 1920s, Usui did a 21-day practice on Mount Kurama-yama called discipline of prayer and fasting. Common belief dictates that it was during these 21 days that Usui developed Reiki.
42. Andrew Weil - Year: 1942-Present - Fields: Integrative Medicine - Country: USA - Weil is a physician, author, spokesperson and guru of holistic health and integrative medicine. Weil played a seminal role in codifying and establishing the emerging field of integrative medicine. Integrative medicine aims to combine alternative medicine, conventional evidence-based medicine, and other practices into a higher-order system to address human healing. Weil entered Harvard University in 1960, majoring in biology. In that period, he met Harvard psychologists Timothy Leary and Richard Alpert, and separately engaged in organized experimentation with mescaline. Weil entered Harvard Medical School and received a medical degree in 1968. He then moved to San Francisco, where he volunteered at the Haight-Ashbury Free Clinic under David E. Smith (#8 on our list). Proper nutrition, exercise, and stress reduction are also emphasized by Weil, an advocate of diets rich in organic fruits, organic vegetables, and fish. Weil promotes the use of whole plants as a less problematic approach in comparison to synthetic pharmaceuticals. As of 2015, Weil serves as an academician at the University of Arizona College of Medicine, where he is Professor of Integrative Rheumatology, Clinical Professor of Medicine, and Professor of Public Health. In 1994, Weil founded and directed the Arizona Center for Integrative Medicine. At the center, he started The Weil Integrative Medicine Library, which includes specialty volumes in oncology, cardiology, rheumatology, pediatrics, psychology, and other specialties. He encourages patients to incorporate alternative therapies, use of nutritional supplements, meditation and "spiritual" strategies. Recently featured on the Joe Rogan podcast, Dr. Weil continues to spread the word of alternative medicine. A healing approach which encompasses body, mind, and spirit has made Dr. Weil a pioneer in the field of integrative medicine.
43. Johann Georg Mezger - Year: 1838-1909 - Fields: Swedish Massage - Country: Netherlands - Per Henrik Ling, a prominent Swedish medical-gymnastic practitioner influenced by Chinese martial arts and "Tuina" medical techniques, combined Chinese features with early 19th century sports medicine and created his "Medical Gymnastics" system for relieving sore muscles, increasing flexibility and promoting general health. The term "Swedish Massage" was spawned by these combinations. Ling's theories and practice were highly popularized by the Dutch physician Johann Georg Mezger. Mezger simplified the movements based on the gymnastics developed by Ling, and categorized the methods of soft tissue manipulation into four broad technique categories. Later, in the late 1800s, as Swedish massage gained popularity, a fifth technique category was added. Today one of the most common types of massage practiced in the western hemisphere is Swedish massage.
1. Edward Bach - Year: 1886-1936 - Fields: Flower Remedies - Country: UK - The English doctor, bacteriologist, homeopath and writer was best known for his creation of Bach Flower Remedies. The Flower Remedies were a form of medicine inspired by historical homeopathic traditions. He developed the Flower Remedies at the age of 43, while in search of a new healing technique. He spent the spring and summer preparing new flower remedies, and in the winter he treated patients free of charge. Rather than being based on the scientific method, Bach's flower remedies were intuitively derived and based on his perceived psychic connections to the plants. While he recognized the role of germ theory in disease, Bach wondered how exposure to a pathogen could make one person sick while another was unaffected. He hypothesized that illness was the result of a conflict between the purposes of the soul and the personality's actions and outlooks. This internal war leads to emotional imbalance, which eventually leads to physical diseases. Bach's remedies focus on treatment of the patient's personality, which he believed to be the ultimate cause of disease. He had a significant effect on how natural remedies were developed, and the thinking behind his work inspired many others to create natural solutions to problems.
2. Milton H. Erickson - Year: 1901-1980 - Fields: Hypnotherapy - Country: USA - Milton Erickson was an American psychiatrist and psychologist who specialized in medical hypnosis and family therapy. He is noted for his approach to the unconscious mind as a creative and solution-generating tool. Erickson is also known for influencing brief therapy, strategic family therapy, family systems therapy, solution focused brief therapy, and neuro-linguistic programming. Erickson was an avid medical student, and he was so curious about psychiatry that he obtained a psychology degree while he was still studying medicine. Erickson's fame and reputation spread rapidly and soon he began holding teaching seminars, which continued until his death. Through conceptualizing the unconscious as highly separate from the conscious mind, with its own awareness, interests, responses, and learnings, he taught that the unconscious mind was creative, solution-generating, and often positive. He was an important influence on neuro-linguistic programming (NLP), which is based on his methods. He believed that the unconscious mind was always listening, whether or not the patient was in trance. Typically Erickson would instruct his patients to actively and consciously perform the symptom that was bothering them, usually with some minor or trivial deviation from the original symptom. In many cases, the deviation could be amplified and used as a "wedge" to transform the whole behavior. Erickson would often ensure that the patients had been exposed to an idea, often in a metaphorical form (hidden from the conscious mind), in advance of utilizing it for a therapeutic purpose.
3. Bernie Siegel - Year: 1932-Present - Fields: Patient Care - Country: USA - Siegel is a retired pediatric surgeon, who writes on the relationship between the patient and the healing process. He is known for his best-selling book Love, Medicine and Miracles. As described in a 1989 article in The New York Times, patients "with cancer and such other serious illnesses as AIDS and multiple sclerosis use group and individual psychotherapy, imagery exercises and dream work to try to unravel their emotional distress, which, Dr. Siegel says, strongly contributes to their physical maladies.” The Exceptional Cancer Patients (ECP) non-profit was created to provide resources, professional training programs and interdisciplinary retreats that help people facing the challenges of cancer and other chronic illnesses. Siegel is an Academic Director of the Experiential Health and Healing program at The Graduate Institute in Connecticut. In 1988, Siegel's Love, Medicine and Miracles ranked #9 on The New York Times Best Seller list of hardcover nonfiction books. The book remained on the Times bestseller list for more than a year.
1. Tan Khoon Yong - Year: 1954-Present - Fields: Feng Shui - Country: Singapore - Yong is a Feng Shui grand master from Singapore. In 1984, he established the Way Fengshui Group. He has conducted numerous seminars, including an annual "Chinese Zodiac & Fengshui Seminar". In 1993, he was appointed the Academic Adviser to the Department of Philosophy at Peking University. He received the Public Service Medal (PBM) in 1999 from then-Singapore President S.R. Nathan. In 2008, he received the title of "Feng Shui Grand Master" from the International Feng Shui Association (IFSA) and is the first feng shui practitioner in Singapore to be awarded this title. He holds the offices of Vice-President of the International Fengshui Association, Vice-Chairman of the Organization for Promoting Global Civilization, and Honorary Council Chairman of the Singapore Association of Writers. Yong has been a speaker for the International Feng Shui Convention since 2004.
2. Ram Dass - Year: 1931-Present - Fields: Enlightenment - Country: USA -
Ram Dass is an American spiritual teacher, former academic and clinical psychologist, and author of many books, including the 1971 book Be Here Now. He is well known for his relationship with Timothy Leary at Harvard University and his involvement in the LSD movement of the 1960s. He travelled many times to India seeking spiritual enlightenment, formed his relationship with the Hindu guru Neem Karoli Baba, and founded the charitable organizations Seva Foundation and the Hanuman Foundation. Over the course of his life since the inception of his foundation in 1974, Ram Dass has given all of his book royalties and profits from teaching to his foundation and other charitable causes. The estimated amount of earnings he has given away annually ranges from $100,000 to $800,000.
1. Erwin Schrödinger - Year: 1887-1961 - Fields: Quantum Physics - Country: Austria - Schrödinger was a Nobel Prize-winning physicist who developed a number of fundamental results in quantum theory, which formed the basis of wave mechanics. In the first years of his career, Schrödinger became acquainted with the ideas of quantum theory developed in the works of Max Planck, Albert Einstein, Niels Bohr, and others. He formulated the wave equation and demonstrated the equivalence of his wave-mechanical formalism and matrix mechanics. Schrödinger proposed an original interpretation of the physical meaning of the wave function. In his 1944 book What Is Life? he paid great attention to the philosophical aspects of science, ancient and oriental philosophical concepts, ethics, and religion. He later became a professor at the University of Oxford. In 1935, after extensive correspondence with Albert Einstein, he proposed what is now called the Schrödinger's cat thought experiment. In January 1926, Schrödinger published "Quantisierung als Eigenwertproblem" on wave mechanics and presented what is now known as the Schrödinger equation. This paper has been universally celebrated as one of the most important achievements of the 20th century and created a revolution in most areas of quantum mechanics. These papers were his central achievement and were at once recognized as having great significance by the physics community.
2. Dr. Justin Feinstein - Fields: Sensory Deprivation - Country: USA - Dr. Feinstein is the director of the Float Clinic and Research Center (FCRC) at the Laureate Institute for Brain Research. The FCRC's mission is to investigate the effects of floatation on both the body and the brain, as well as explore its potential as a therapeutic treatment for promoting healing in patients who suffer from anxiety and PTSD. An open-label trial of 50 patients at Dr. Justin Feinstein's Float Clinic and Research Center at LIBR provided an initial proof-of-principle study showing that one hour of float therapy can provide significant short-term relief from symptoms of stress and anxiety across a range of different conditions, including Post-Traumatic Stress Disorder (PTSD), Panic Disorder, Agoraphobia, Social Anxiety Disorder, and Generalized Anxiety Disorder. Beyond resolving symptoms of mental illness, the experience greatly enhanced mental wellness, leaving patients in a peaceful, serene state afterwards. This mood-enhancing effect of floatation was especially notable given that most of the patients had comorbid depression. |
1b4bbb67cb968e06 | A question on an exam asked why there is exactly one sigma bond in double and triple covalent bonds. I looked in my text and online after the exam, but couldn't find an answer to the question.
Why can there not be more than one sigma bond in a set of covalent bonds?
• $\begingroup$ What atomic orbitals overlap to form a sigma bond? $\endgroup$ – LordStryker Jun 3 '15 at 21:31
• $\begingroup$ @LordStryker, the s and p orbitals overlap to form the orbitals involved in sigma bonds, if that's what you mean. $\endgroup$ – Hal Jun 3 '15 at 21:50
$\begingroup$ "Why can there not be more than one sigma bond in a set of covalent bonds?" Actually that is quite a profound question. Your thinking is far ahead of your test. The answer to your question is that while introductory texts often display double and triple bonds as one sigma bond and the rest pi bonds, they can also be equivalently described as 2 or 3 "bent" sigma bonds. So double bonds and triple bonds can be described using only sigma bonds or as a mixture of pi and sigma bonds. $\endgroup$ – ron Jun 3 '15 at 21:52
$\begingroup$ Strictly speaking, you can get two $\sigma$ bonds between the same two atoms, though it is rare. One example is in the gaseous dimolybdenum molecule $\ce{Mo2}$ with its sextuple bond, which has both an $s\!-\!s$ $\sigma$ bond and a $d\!-\!d$ $\sigma$ bond, as well as two $d\!-\!d$ $\pi$ bonds and two $d\!-\!d$ $\delta$ bonds. I've never heard of a double sigma bond for $\ce{C=C}$, though in some interpretations dicarbon $\ce{C2}$ has two $\pi$ bonds with no $\sigma$ bond. $\endgroup$ – Nicolau Saker Neto Jun 4 '15 at 1:16
$\begingroup$ This is where chemistry really gets interesting -- when you try to pin a concept down and, after some wild times down the rabbit hole, find out that things are far more subtle and more weird than you ever thought possible. Nearly every concept is initially taught at a (relatively) comprehensible, modest-to-extreme level of approximation. As learning proceeds, the approximations are gradually stripped away until one finally starts to bump up against the limits of human knowledge. $\endgroup$ – hBy2Py Jun 6 '15 at 23:27
Why can there not be more than one sigma bond in a set of bonds?
There can be, even in simple carbon compounds. Bent bonds, tau bonds or banana bonds (whatever you might like to call them) were proposed by Linus Pauling; Erich Hückel proposed the alternative $\sigma - \pi$ bonding formalism. Hückel's description is the one commonly seen in introductory texts, but both methods produce equivalent descriptions of the electron distribution in a molecule.
In order to better understand the bent bond model let's first consider its application to cyclopropane and then move to ethylene.
In cyclopropane it has been found that significant electron density lies off the internuclear axis, rather than along the axis.
[Figure: experimental electron density map of cyclopropane, with density maxima lying off the C-C internuclear axes; original image and source link not preserved]
Further, the $\ce{H-C-H}$ angle in cyclopropane has been measured and found to be 114 degrees. From this, and using Coulson's theorem $$1+\lambda^2\cos(114^\circ)=0$$ where $\lambda^2$ represents the hybridization index of the bond, the $\ce{C-H}$ bonds in cyclopropane can be deduced to be $\mathrm{sp^{2.46}}$ hybridized. Now, using the equation $$\frac{2}{1+\lambda_{\ce{C-H}}^2}+\frac{2}{1+\lambda_{\ce{C-C}}^2}=1$$ (which says that the "s" character summed over all bonds at a given carbon must total 1) we find that $\lambda_{\ce{C-C}}^2 = 3.74$, i.e. the $\ce{C-C}$ bonds are $\mathrm{sp^{3.74}}$ hybridized.
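For readers who want to verify these numbers, here is a minimal Python sketch of the calculation (the helper function and its name are my own illustration, not a standard library routine):

```python
import math

def lambda_sq_from_angle(angle_deg):
    # Coulson's theorem for two equivalent hybrids separated by angle theta:
    # 1 + lambda^2 * cos(theta) = 0  =>  lambda^2 = -1 / cos(theta)
    return -1.0 / math.cos(math.radians(angle_deg))

# Cyclopropane: measured H-C-H angle of 114 degrees
lam_ch_sq = lambda_sq_from_angle(114)   # ~2.46, so the C-H bonds are sp^2.46

# The s-character at each carbon must sum to 1 over its four bonds:
# 2/(1 + lam_CH^2) + 2/(1 + lam_CC^2) = 1
s_left = 1 - 2 / (1 + lam_ch_sq)        # s-character left for the two C-C bonds
lam_cc_sq = 2 / s_left - 1              # ~3.74, so the C-C bonds are sp^3.74

print(f"C-H: sp^{lam_ch_sq:.2f}, C-C: sp^{lam_cc_sq:.2f}")
```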
Pictorially, the bonds look as follows. They are bent (hence the strain in cyclopropane) and concentrate their electron density off of the internuclear axis as experimentally observed.
[Figure: sketch of the bent C-C bonds in cyclopropane, with electron density concentrated off the internuclear axes]
We can apply these same concepts to the description of ethylene. Using the known $\ce{H-C-H}$ bond angle of 117 degrees, Coulson's theorem and assuming that we have one p-orbital on each carbon (Hückel's $\sigma - \pi$ formalism), we would conclude that the carbon orbitals involved in the $\ce{C-H}$ bond are $\ce{sp^{2.2}}$ hybridized and the one carbon orbital involved in the $\ce{C-C}$ sigma bond is $\ce{sp^{1.7}}$ hybridized (plus there is the unhybridized p-orbital). This is the "$\ce{sp^2}$" description we see in most textbooks.
Alternately, if we only change our assumption of one pi bond and one sigma bond between the two carbon atoms to two equivalent sigma bonds (Pauling's bent bond formalism), we would find that the carbon orbitals involved in the $\ce{C-H}$ bond are again $\ce{sp^{2.2}}$ hybridized, but the two equivalent $\ce{C-C}$ sigma bonds are $\ce{sp^{4.3}}$ hybridized. We have constructed a two-membered ring cycloalkane analogue!
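The same bookkeeping reproduces both ethylene descriptions; the only difference is whether one or two equivalent $\ce{C-C}$ hybrids share the s-character left over by the $\ce{C-H}$ bonds. A short self-contained check (again only an illustrative sketch):

```python
import math

lam_ch_sq = -1.0 / math.cos(math.radians(117))  # H-C-H = 117 deg => ~2.20 (sp^2.2 C-H)
s_left = 1 - 2 / (1 + lam_ch_sq)                # s-character not used by the two C-H bonds

# Huckel sigma-pi picture: one C-C sigma hybrid takes all of it
# (the unhybridized p-orbital contributes no s-character)
lam_sigma = 1 / s_left - 1                      # ~1.66 => sp^1.7

# Pauling bent-bond picture: two equivalent bent C-C hybrids share it
lam_bent = 2 / s_left - 1                       # ~4.33 => sp^4.3

print(f"sigma-pi: sp^{lam_sigma:.1f}, bent bonds: sp^{lam_bent:.1f}")
```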
[Figure: bent-bond depiction of ethylene as a two-membered ring analogue]
Hybridization is just a mathematical construct, another model to help us understand and describe molecular bonding. As shown in the ethylene example above, we can mix carbon's atomic s- and p-orbitals in different ways to describe ethylene. However, the two different ways we mixed these orbitals
• one s-orbital plus two p-orbitals plus one unmixed p-orbital (Hückel's $\sigma - \pi$), or
• one s-orbital plus three p-orbitals (Pauling's bent bond)
must lead to equivalent electronic descriptions of ethylene. As noted in the Wikipedia article (first link at the beginning of this answer),
There is still some debate as to which of the two representations is better,[10] although both models are mathematically equivalent. In a 1996 review, Kenneth B. Wiberg concluded that "although a conclusive statement cannot be made on the basis of the currently available information, it seems likely that we can continue to consider the σ/π and bent-bond descriptions of ethylene to be equivalent."[2] Ian Fleming goes further in a 2010 textbook, noting that "the overall distribution of electrons [...] is exactly the same" in the two models.[11]
The bent bond model has the advantage of explaining both cyclopropane (and other strained molecules) as well as olefins. The strain in cyclopropane and ethylene (e.g. heats of hydrogenation) also make intuitive sense with the bent bond model where the term "bent bond" conjures up an image of strain.
So, bent bonds are a mix of both sigma and pi properties and multiple bent bonds can be used to describe the bonding between adjacent atoms.
• $\begingroup$ The last statement (in bold) is technically incorrect, because a bent bond is by definition something uniquely different from a sigma bond. $\endgroup$ – Martin - マーチン Sep 13 '15 at 14:38
$\begingroup$ @Martin-マーチン I've thought some more about your comment. I've always thought of a bent bond as just another sigma bond that lies along the continuum from pure s to pure p. For me, the differentiating factor between sigma and pi bonds has been the number of areas of electron overlap. In a sigma bond there is one such area, while there are two such areas in a pi bond. What definition are you using that excludes a bent bond from being considered as a sigma bond? $\endgroup$ – ron Sep 13 '15 at 18:59
$\begingroup$ And I thought about your comment, too. The separation into sigma, pi, ... bonds and the bent bond picture are just two completely different approaches to one problem. Hückel's approach preserves orbital symmetry, while tau bonds don't. I don't have a clear grasp of bonds one way or the other (both are simplifications), which is why I prefer the delocalised picture - which does not necessarily give the correct answer either. So I can only rely on the definitions of the underlying description, which are different; hence sigma bonds and bent bonds cannot be the same. When it comes to bonds, it gets philosophical. $\endgroup$ – Martin - マーチン Sep 14 '15 at 7:10
• $\begingroup$ I would like to discuss a somewhat uncomfortable feeling that the depiction of ethylene in terms of bent bonds conveys to me, at least at first glance. It seems to me that the two equivalent bonds leave the molecule more susceptible to breaking completely, at least if I imagine two addition reactions taking place at the same moment, or if I simply deliver, mathematically, the amount of energy necessary to break apart the "two bonds of the double bond". This might be due to my way of thinking of sigma bonds as skeletons along which the distribution of more mobile and diffuse pi electrons occurs. $\endgroup$ – Alchimista Jul 13 '17 at 22:46
• $\begingroup$ Continuing: in particular because this is quite a workable picture in my field, where extended pi-conjugated systems are the workhorses. On the other hand, from a vibrational point of view, the bent bond picture is easily visualised as two springs having the same force constant, while in the pi-sigma treatment it is as if one spring sits there loose, waiting for the other one to break before it starts to oscillate. As you see this is pictorial, but I think it conveys what I would like to express (without doubting the value and equivalence of both treatments). $\endgroup$ – Alchimista Jul 13 '17 at 22:56
I’m not sure if this answering attempt is correct in the light of Mithoron’s and ron’s comments on your question, but this is the way I learnt it, so if this is wrong I will at least learn something, too.
We all know what s-, p- and d-orbitals look like, but what is the significance, and why do these orbitals preferentially form $\sigma$, $\pi$ and $\delta$ bonds, respectively?
Mathematically speaking, orbitals are solutions of the Schrödinger equation for the hydrogen atom. The model in question is a non-rigid rotor,* i.e. the rotor's axis is not fixed in any spatial direction (the electron can rotate freely around the nucleus). For solving this equation, it is helpful to use polar coordinates $(r, \varphi, \theta)$, mainly because the solution can be split into a radial factor (dependent only on $r$) and angular factors (dependent on $\varphi$ and $\theta$).
$$\Psi (r, \varphi, \theta) = R(r) \cdot Y(\varphi, \theta)$$
$R(r)$ can be thought of as giving an orbital its extension into space while $Y(\varphi, \theta)$ gives it its shape. Both functions depend heavily on quantum numbers: $R(r)$ does so for $n$ and $l$ while $Y(\varphi, \theta)$ depends on $l$ and $m_l$. For the simplest case ($l = 0; m_l = 0$, an s-orbital), $Y (\varphi, \theta)$ degenerates to a simple constant, meaning that the orbital will have a totally symmetrical spherical shape. For $l = 1$ (a p-orbital), the function loses spherical symmetry but still keeps total symmetry with respect to one axis, i.e. every slice you take through that orbital perpendicular to the axis of symmetry will be a circle. Higher quantum numbers lose more symmetry, but that is not always as easily visualised, so I'll stick with these.
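For concreteness, the lowest of these angular factors are the standard spherical harmonics (quoting textbook results):

$$Y_0^0(\varphi, \theta) = \frac{1}{\sqrt{4\pi}}, \qquad Y_1^0(\varphi, \theta) = \sqrt{\frac{3}{4\pi}}\,\cos\theta$$

$Y_0^0$ is a constant, giving the spherical s-orbital; $Y_1^0$ depends on $\theta$ but not on $\varphi$, which is precisely the 'every slice perpendicular to the symmetry axis is a circle' behaviour described above.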
But you were talking about bonds, where do they come into play? Well, bonds also have a symmetry, but they have an axis instead of a nucleus, so their symmetry will be reduced per se. The simplest symmetry along a bond axis is total rotational symmetry around the bond's axis. I hope you see the similarity between the s-orbital (total symmetry around a central point) and a $\sigma$ bond (total symmetry around the bond's central axis). Similarly, a $\pi$ bond will always have one degree of symmetry less, which turns out to mean 'having a plane of symmetry that includes the bond axis'. And a $\delta$ bond will have two planes of symmetry, yet another degree of symmetry less.
According to this definition, an orbital that can take part in a $\sigma$ bond needs to have full rotational symmetry along the bond's axis. That means that there is only one, at most two, orbitals that fulfil the criterion (but if there are two, one is going to be an unmodified s-orbital and likely not take part in bonding at all). Therefore, only one $\sigma$ bond would be possible between two atoms.
Writing this up, I remembered the 'banana bonds' that were introduced to us to explain the extremely small ($60°$) bond angles in $\ce{P4}$. I would need to go back, recheck and rethink what I would make of those and whether I would treat them as exceptions to this 'rule' or simply as special cases that need additional information to be discussed. They certainly deserve consideration, as they are, de facto, $\sigma$ bonds from the way they look, even though they do bend.
An interesting comment was left on the question pointing to sextuple bonds. I didn't know bonds of that order existed; my knowledge was stuck at 4. For a quadruple bond, possible between certain transition metals such as in $\ce{[Re2Cl8]^2-}$, four of the five d-orbitals form a bond to the other metal: one being $\sigma$, two being $\pi$ and a fourth one of $\delta$ type (two planes of symmetry). Extending that to a quintuple bond by adding a second $\delta$ layer with the last remaining pair of d-orbitals isn't hard.
The sextuple bond (e.g. in $\ce{Mo2}$) derives from an additional $\sigma$ bond between the s-orbitals of the higher shell. You thereby solve a problem you would otherwise have: the $4\mathrm{d}_{z^2}$ orbital can take part in $\sigma$ bonding along the $z$-axis, and the higher $5\mathrm{s}$ orbital is more diffuse, extends further into space and therefore is still able to form a contact to the neighbouring atom's counterpart. Because it is more or less a sphere, it can only form $\sigma$ bonds.
* I don’t think this is the model’s correct name. In my German quantum chemistry class, the rigid rotor was a raumstarrer Rotator and thus the model here was a raumfreier Rotator. Somebody who might know the proper name please comment (or edit).
• $\begingroup$ As for the name, what I was taught in QM was simply "3D rigid rotor". $\endgroup$ – orthocresol Jun 5 '15 at 23:42
• $\begingroup$ @orthocresol So that would mean that 'rigid' does not refer to an axis of rotation being 'rigid' with respect to the surrounding space, because otherwise '3D rigid' doesn't make sense at all? $\endgroup$ – Jan Jun 5 '15 at 23:56
• $\begingroup$ I think the definition of "rigid" is that $r$ is a constant. Which doesn't seem to make sense either... $\endgroup$ – orthocresol Jun 6 '15 at 0:03
• $\begingroup$ Maybe it's because the potential in an atom is purely a function of $r$, so the angular wavefunctions are similar in form to the solutions of the rigid rotor model (where $r$ is fixed). I am actually rather out of my depth here so maybe someone else can come along and answer. BTW Atkins calls it a "particle on a sphere" - these give rise to the spherical harmonics $Y(\theta,\phi)$ which are the angular part of the wavefunctions. $\endgroup$ – orthocresol Jun 6 '15 at 0:17
Short answer
The main thing that determines the shape of orbitals is that they must have zero net overlap with all other orbitals (i.e. that they are orthogonal). It's quite hard to construct two orthogonal $\sigma$-bonding MOs connecting the same two atoms.
More details
$\sigma$-bonding MOs are typically $sp^x$ hybridized orbitals with maximum overlap along the line connecting two atomic centers:
[figure: a typical sigma-bonding MO]
It's hard to construct another bonding MO with maximum overlap along the same line that is orthogonal to this MO. To stay orthogonal we have to introduce a node, and the next $\sigma$ MO becomes anti-bonding.
The usual way to get another, orthogonal, bonding MO is a $\pi$-bond.
If you have valence $d$-electrons available, then it is possible to construct another, orthogonal, $\sigma$-orbital.
However, it is still rare to have two $\sigma$ bonding MOs because transition metals tend to lose the $s$-electrons needed for one of the $\sigma$-MOs very readily. So double $\sigma$-bonding tends to be observed only in special cases such as gas phase $\ce{Mo2}$.
• $\begingroup$ This is why, in my modest opinion, the pi-sigma picture is more straightforward compared to the tau (bent-bond) approach in the case of ethylene. However it is nice to remember that it is a matter of which orbitals one combines. $\endgroup$ – Alchimista Jul 13 '17 at 22:34
I have seen many questions on SE about the dual nature of electrons, behaving in certain circumstances as particles and in others as waves. There is one thing I couldn't get a clear answer on.
When making the double slit experiment, we all agree that the electrons behave as waves. The same is true in atoms, where electron levels are described by the Schrödinger equation. However, if we speak about a field like plasma physics (my field of work) and maybe beam physics, electrons are treated classically as particles, applying Newton's equations to describe their motion. The models built on the particle treatment of electrons show an excellent agreement with experimental results.
From experimental results and testing, we know that electrons behave like waves (in the double slit experiment) or as particles (gas discharge models). My question is: is experimenting the only way to decide which model (wave/particle) describes electrons better in particular circumstances? Isn't there any theoretical framework that decides whether electrons will behave as particles or waves in a particular circumstance?
For the record, in plasma physics the strongest type of theoretical model is called the Particle-In-Cell (PIC) model. In those models Newton's equations of motion are solved for a huge number of particles, including electrons. Then the macroscopic properties are determined by averaging. Although this method treats electrons classically, it is very successful in explaining what happens in experiments.
MaxGraves' answer is pretty much what I was going to write. Just to add a conceptual/terminological point that always annoys me personally: electrons always behave as quantum particles, which is never the same thing as a classical wave or classical particle. It is not a wave one day and a particle the next. It is always one thing, but that thing is not perfectly analogous to anything classical. – Michael Brown Sep 30 '13 at 17:08
So the whole wave/particle duality thing is approximate at best and highly misleading at worst. The rules of quantum mechanics simply work without any extra input about whether today is a "wave day" or a "particle day."[/rant over] Classical mechanics arises as the so called "geometrical optics" approximation to quantum mechanics, if you want to look that up. – Michael Brown Sep 30 '13 at 17:09
1 Answer
When we treat quantum mechanical objects as if they are particles, this is often referred to as a classical treatment. Intuitively, this is going to be valid based on a simple argument related to the de Broglie wavelength:\begin{equation} \lambda_{dB} = \sqrt{\dfrac{2 \pi \hbar^2}{m k_B T}}.\end{equation} Most often, when this wavelength is on the order of the interatomic (or inter-'object') spacing, quantum mechanical effects become quite relevant and one must consider the wave-like nature of matter. For wavelengths much smaller than the distance between atoms (or molecules, elementary particles, etc.) quantum effects will be negligible and the classical treatment works just fine. You can notice that $\lambda_{dB}$ is a function of both the mass of the object and the temperature, so making either of these larger while the other is constant will decrease the de Broglie wavelength.
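To put numbers on this, here is a hedged sketch (my addition, not part of the original answer; the constants are standard SI values):

```python
import math

hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J / K
m_e  = 9.1093837015e-31  # kg

def lambda_dB(m, T):
    """Thermal de Broglie wavelength sqrt(2*pi*hbar^2 / (m*k_B*T)) in metres."""
    return math.sqrt(2 * math.pi * hbar ** 2 / (m * k_B * T))

print(lambda_dB(m_e, 300))   # ~4.3e-9 m for a room-temperature electron
print(lambda_dB(m_e, 1e5))   # ~2.4e-10 m for a hot plasma electron
```

Raising the temperature by a few orders of magnitude shrinks the wavelength well below typical interparticle spacings, which is why the classical treatment works so well in hot plasmas.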
You work in plasma physics, so this wavelength will most often be very small due to the high temperatures, even for very 'light' entities such as electrons. As such you need not consider the wave-like properties of the electron to make accurate calculations of certain physical properties of the system. Electrons are negatively charged, and because of the Coulomb repulsion I would suspect that, no matter how much energy they have, they will not be a distance apart that is on the order of this wavelength. I most often study low-temperature condensed matter though, so I may be wrong about this spacing.
Hope this helps give some intuitive picture of when the classical treatment is acceptable without having to refer to empirical evidence.
Michael Brown: I absolutely agree with you about the fact that wave/particle duality never takes a vacation and that when we work in the classical approximation, it is not because our electrons have somehow become particles rather than waves. My answer was really to illuminate the fact that one needs to determine which character is more dominant in order to effectively model systems. Also, can you please accept my answer so that I get >50 rep points, I am new and would like to be able to leave comments!! – MaxGraves Sep 30 '13 at 17:43
Thanks for the information @MaxGraves , I did some calculation for de Broglie wavelength of electrons with temperature of 300 K which gave 6 nm approximately. The latest double slit experiment was done at University of Nebraska-Lincoln where they used a slit width of 62 nm. Don't you think that having a slit width 10 times larger than electron wavelength should make the electron behave classically? – Gotaquestion Sep 30 '13 at 20:29
Hmm, no this is only one order of magnitude larger. Besides that point, this goes back to what @Michael Brown said, which was that you never have strictly one or the other. You are forced by nature to realize that there are wavelike properties and particle like properties to everything. As far as I'm aware, one should expect the diffraction from the double slit to occur regardless of temperature. My statement was more about in what regime can you treat particles as classical and get somewhat reasonable results from calculations of expected physical observables such as energy, etc... – MaxGraves Sep 30 '13 at 21:33
Sturm–Liouville problem, also called an eigenvalue problem, in mathematics: the determination of the set of values of a parameter in a second-order differential equation for which solutions exist satisfying not only the differential equation itself but also a set of specified auxiliary conditions, usually called boundary values (see boundary value). Such equations are common in both classical physics (e.g., thermal conduction) and quantum mechanics (e.g., the Schrödinger equation), where they describe processes in which some external value (boundary value) is held constant while the system of interest transmits some form of energy.
The principles of solving this class of problems were established in the mid-1830s by the French mathematicians Charles-François Sturm and Joseph Liouville, who independently worked on the problem of heat conduction through a metal bar, in the process developing techniques for solving a large class of partial differential equations (PDEs); in the 20th century those principles were applied in the development of quantum mechanics, as in the solution of the Schrödinger equation and its boundary values.
A simple example of such a problem is finding a solution y(x) to the equation y″ + c²y = 0 such that the function equals zero when x equals 0 or some number a. The function y = sin cx satisfies the equation, but it meets the auxiliary conditions only if c = ±nπ/a, in which n = 0, 1, 2, . . . .
These problems are also called eigenvalue problems and involve, more generally, finding a solution of an equation of the form [p(x)y′]′ + [q(x) − λr(x)]y = 0 that satisfies the auxiliary conditions a1y(a) + a2y′(a) = 0 and a3y(b) + a4y′(b) = 0, in which a1, a2, a3, and a4 are constants. Here y is some physical quantity (or the quantum mechanical wave function) and λ is a parameter, or eigenvalue, that constrains the equation so that y satisfies the boundary values at the endpoints of the interval over which the variable x ranges. If the functions p, q, and r satisfy suitable conditions, then, as in the simpler example above, the equation will have a family of solutions, called eigenfunctions, corresponding to certain values of λ, called eigenvalues.
For the more complicated nonhomogeneous case, in which the right side of the above equation is a function f(x) rather than zero, the eigenvalues of the corresponding homogeneous equation can be compared with the value of λ in the original equation. If these values are different, the problem will have a unique solution. On the other hand, if λ matches one of the eigenvalues, the problem will have either no solution or a whole family of solutions, depending on the properties of the function f(x).
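As a quick numerical illustration (my addition; the entry itself contains no computation, and the grid size below is arbitrary), the eigenvalues of the simple example y″ + c²y = 0 with y(0) = y(a) = 0 can be recovered by discretizing the second derivative:

```python
import numpy as np

a, N = 1.0, 200
h = a / N

# Matrix of -y'' on the interior grid points (Dirichlet boundary values)
main = 2.0 * np.ones(N - 1)
off = -1.0 * np.ones(N - 2)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h ** 2

eigenvalues = np.linalg.eigvalsh(L)          # approximations to c^2
print(np.sqrt(eigenvalues[:3]) * a / np.pi)  # ~ [1, 2, 3], i.e. c = n*pi/a
```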
The non-endpoint Strichartz estimates for the (linear) Schrödinger equation: $$ \|e^{i t \Delta/2} u_0 \|_{L^q_t L^r_x(\mathbb{R}\times \mathbb{R}^d)} \lesssim \|u_0\|_{L^2_x(\mathbb{R}^d)} $$ $$ 2 \leq q,r \leq \infty,\;\frac{2}{q}+\frac{d}{r} = \frac{d}{2},\; (q,r,d) \neq (2,\infty,2),\; q\neq 2 $$ are easily obtained using (mainly) the Hardy-Littlewood-Sobolev inequality; the endpoint case $q = 2$ is, however, much harder (see Keel-Tao, for example).
Playing around with the Fourier transform one sees that estimates for the restriction operator sometimes give estimates similar to Strichartz's. For example, the Tomas-Stein restriction theorem for the paraboloid gives: $$ \|e^{i t \Delta/2} u_0\|_{L^{2(d+2)/d}_t L^{2(d+2)/d}_x} \lesssim \|u_0\|_{L^2_x}, $$ which, interpolating with the easy bound $$ \|e^{i t \Delta/2} u_0\|_{L^{\infty}_t L^{2}_x} \lesssim \|u_0\|_{L^2_x}, $$ gives precisely Strichartz's inequality but restricted to the range $$ 2 \leq r \leq 2\frac{d+2}{d} \leq q \leq \infty. $$
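As a sanity check on the exponents (this arithmetic is mine, not the poster's), the Tomas-Stein exponent is indeed admissible:
$$ \frac{2}{q}+\frac{d}{r}\bigg|_{q=r=\frac{2(d+2)}{d}} = \frac{d}{d+2}+\frac{d^2}{2(d+2)} = \frac{d(d+2)}{2(d+2)} = \frac{d}{2}. $$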
As far as I know, the Tomas-Stein theorem (for the whole paraboloid) gives the restriction estimate $R_S^*(q'\to p')$ for $q' = \bigl(\frac{dp'}{d+2}\bigr)'$ (this $q$ is different from the one above), so I'm guessing that this cannot be strengthened (?).
So my question is: what's the intuition of what goes wrong when trying to prove Strichartz's estimates all the way down to the endpoints using only Fourier restriction theory?
From my less-than-expert (where's Terry when you need him?) point of view, a possible reason seems to be the following (I wouldn't call it something going wrong or even a difficulty):
The statement of a restriction estimate only gives you an estimate where the left hand side is an isotropic Lebesgue space, in the sense that you get an estimate $L^q_tL^r_x$ with $q = r$. This naturally excludes the end-point, which requires $r > q$.
Why is this? The reason is that the restriction theorems only care about the local geometry of the hypersurface, and not its global geometry. (For example, the versions given in Stein's Harmonic Analysis require either that the hypersurface have non-vanishing Gaussian curvature for a weaker version, or that the hypersurface be of finite type for a slightly stronger version. Both of these conditions are assumptions on the geometry of the hypersurface locally as a graph over a tangent plane.) Now, on each local piece, you do have something more similar to the classical dispersive estimates with $r > q$, which is derived using the method of oscillatory integrals (see, for example, Chapter IX of Stein's book; the dispersive estimate (15) [which has, morally speaking, $q = r = \infty$ but with a weight "in $t$", so actually implies something with $q < \infty$] is used to prove Theorem 1, which is then used to derive the restriction theorem). But once you try to piece together the various "local" estimates to get an estimate on the whole function, you have no guarantee of what the "normal direction" is over the entire surface. (The normal direction, in the case of the application to PDEs, is the direction of the Fourier conjugate of the "time" variable.) So in the context of the restriction theorem, it is most natural to write the theorem using the $q = r$ version, since in the more general context of restriction theorems, there is no guarantee that you would have a globally preferred direction $t$.
(Note that Keel-Tao's contribution is not in picking out that time direction: that Strichartz estimates can be obtained from interpolation of a dispersive inequality and energy conservation is well known, and quite a bit of the non-endpoint cases are already available as intermediate consequences of the proof of restriction theorems. The main contribution is a refined interpolation method to pick out the end-point exponents.)
share|cite|improve this answer
Particle in a ring is a well-known example where a solution of the Schrödinger equation exists. My question is: in principle we also want that $\psi'(\theta) = \psi'(\theta + 2\pi)$. The thing is that this condition is never explicitly stated (probably because it is fulfilled anyway), but in principle we would also need this condition, right?
Yes, we expect the condition to hold for a ring. – JamalS Apr 29 '14 at 8:42
What is $\psi'$? – John Rennie Apr 29 '14 at 8:43
@JohnRennie the first derivative of the solution to the wavefunction. – Xin Wang Apr 29 '14 at 8:50
Oops, yes, of course. Sorry for the silly question :-) – John Rennie Apr 29 '14 at 8:52
The condition $\psi'(\theta) = \psi'(\theta + 2\pi)$ is a consequence of $\psi(\theta) = \psi(\theta + 2\pi)$ whenever $\psi$ is differentiable. – Qmechanic Apr 29 '14 at 9:50
1 Answer
A particle in a ring corresponds to a configuration space $S^{1}$ which is simply a circle. The solution to the Schrödinger equation is given by (in natural units):
$$\psi_{\pm} = \frac{1}{\sqrt{2\pi}}e^{\pm ir \sqrt{2mE}\theta}$$
Clearly, we must identify $\theta$ with $\theta +2\pi n$. Differentiating the solution yields,
$$\psi_{\pm}' =\pm ir \sqrt{\frac{mE}{\pi}}e^{\pm ir \sqrt{2mE}\theta}$$
The function $\psi'_{\pm}$ differs from $\psi_{\pm}$ only by a constant factor, hence it is also periodic in $\theta$ with period $2\pi$, i.e.
$$\psi'_{\pm}(\theta)=\psi'_{\pm}(\theta+2\pi )$$
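A small symbolic check (my addition, not part of the original answer; it assumes SymPy and abbreviates the quantum number as $n = r\sqrt{2mE}$): once $n$ is an integer, both $\psi$ and $\psi'$ are automatically $2\pi$-periodic.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
n = sp.symbols('n', integer=True)  # n = r*sqrt(2*m*E), forced integer by periodicity
psi = sp.exp(sp.I * n * theta) / sp.sqrt(2 * sp.pi)

# Both differences should simplify to 0:
print(sp.simplify(psi.subs(theta, theta + 2 * sp.pi) - psi))
dpsi = sp.diff(psi, theta)
print(sp.simplify(dpsi.subs(theta, theta + 2 * sp.pi) - dpsi))
```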
Computational chemistry
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. It is, in principle, possible to solve the Schrödinger equation, in either its time-dependent or time-independent form as appropriate for the problem at hand, but in practice this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Present computational chemistry can routinely and very accurately calculate the properties of molecules that contain no more than 10-40 electrons. The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). There is some dispute within the field whether the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated with classical mechanics in methods called molecular mechanics.
Several major areas may be distinguished within computational chemistry:
• The prediction of the molecular structure of molecules by the use of the simulation of forces to find stationary points on the energy hypersurface as the positions of the nuclei are varied.
• Computational approaches to help in the efficient synthesis of compounds.
Molecular structure
A given molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (electronic energy plus repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point occurs where the derivative of that energy with respect to all displacements of the nuclei is zero. A local minimum occurs where all such displacements lead to an increase in energy. The local minimum that is lowest in energy is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimisation.
The determination of molecular structure by geometry optimisation became routine only when efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. In some ways more importantly it allows the characterisation of stationary points. The frequencies are related to the eigenvalues of the matrix of second derivatives (the Hessian matrix). If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (an imaginary frequency), the stationary point is a transition structure. If more than one eigenvalue is negative the stationary point is a more complex one and is of little interest. If found, it is necessary to move the search away from it to continue looking for local minima and transition structures.
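A minimal sketch of this classification logic (my addition, not from the article; the toy Hessians are invented, in arbitrary units):

```python
import numpy as np

def classify(hessian, tol=1e-8):
    """Classify a stationary point by the eigenvalues of its Hessian."""
    negative = int(np.sum(np.linalg.eigvalsh(hessian) < -tol))
    if negative == 0:
        return "local minimum (all frequencies real)"
    if negative == 1:
        return "transition structure (one imaginary frequency)"
    return "higher-order saddle point (of little interest)"

print(classify(np.array([[2.0, 0.3], [0.3, 1.0]])))   # local minimum
print(classify(np.array([[2.0, 0.0], [0.0, -1.0]])))  # transition structure
```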
Ab initio methods
Main article: Ab initio quantum chemistry methods
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations - being derived directly from theoretical principles, with no inclusion of experimental data - are called ab initio methods. This does not imply that the solution is an exact one. They are all approximate quantum mechanical calculations. It means that a particular approximation is carefully defined and then solved as exactly as possible. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer).
Electron correlation
The simplest type of ab initio electronic structure calculation is the Hartree-Fock (HF) scheme, in which the Coulombic electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend to a limit called the Hartree-Fock limit. Many types of calculations, known as post-Hartree-Fock methods, begin with a Hartree-Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. In order to obtain exact agreement with experiment, it is necessary to include relativistic and spin-orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centred on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set.
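As a hedged illustration of "method plus basis set" (my addition; the article does not name a specific program, and PySCF is just one package exposing this workflow):

```python
from pyscf import gto, scf

# Define the molecule and the basis set (geometry in angstrom)
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

mf = scf.RHF(mol)      # the level of theory: restricted Hartree-Fock
energy = mf.kernel()   # iterate the SCF equations to convergence
print(energy)          # total RHF energy in hartree (about -1.117 for H2)
```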
Example: Is Si2H2 like acetylene (C2H2)?
A series of ab initio studies of Si2H2 shows clearly the power of ab initio computational chemistry. They go back over 20 years, and most of the main conclusions were reached by 1995. The methods used were mostly post-Hartree-Fock, particularly configuration interaction (CI) and coupled cluster (CC). Initially the question was whether Si2H2 had the same structure as ethyne (acetylene), C2H2. Slowly (because this started before geometry optimization was widespread), it became clear that linear Si2H2 was a transition structure between two equivalent trans-bent structures and that it was rather high in energy. The ground state was predicted to be a four-membered ring bent into a 'butterfly' structure with hydrogen atoms bridged between the two silicon atoms. Interest then moved to look at whether structures equivalent to vinylidene (Si=SiH2) existed. This structure is predicted to be a local minimum, i.e. an isomer of Si2H2, lying higher in energy than the ground state but below the energy of the trans-bent isomer. Then, surprisingly, a new isomer was predicted by Brenda Colegrove in Henry F. Schaefer, III's group [1]. This prediction was so surprising that it needed extensive calculations to confirm it. It requires post-Hartree-Fock methods to obtain a local minimum for this structure; it does not exist on the Hartree-Fock energy hypersurface. The new isomer is a planar structure with one bridging hydrogen atom and one terminal hydrogen atom, cis to the bridging atom. Its energy is above the ground state but below that of the other isomers [2]. Similar results were later obtained for Ge2H2 [3] and SiGeH2 [4]. More interestingly, similar results were obtained for Al2H2 [5] (and then Ga2H2 [6] and AlGaH2 [7]), which have two electrons fewer than the Group 14 molecules. The only difference is that the four-membered ring ground state is planar and not bent. The cis-mono-bridged and vinylidene-like isomers are present. Experimental work on these molecules is not easy, but matrix isolation spectroscopy of the products of the reaction of hydrogen atoms with silicon and aluminium surfaces has found the ground state ring structures and the cis-mono-bridged structures for Si2H2 and Al2H2. Theoretical predictions of the vibrational frequencies were crucial in understanding the experimental observations of the spectra of a mixture of compounds. This may appear to be an obscure area of chemistry, but the differences between carbon and silicon chemistry are always a lively question, as are the differences between group 13 and group 14 (mainly the B and C differences). The silicon and germanium compounds were the subject of a Journal of Chemical Education article [8].
Density Functional methods
Main article: Density functional theory
Semi-empirical and empirical methods
Main article: Semi-empirical quantum chemistry methods
Molecular mechanics
Main article: Molecular mechanics
In many cases, large molecular systems can be modelled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
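A minimal sketch of such a classical energy term (my addition; the force constant and equilibrium length below are placeholders, not fitted values):

```python
def bond_stretch_energy(r, k=450.0, r0=0.74):
    """Harmonic bond-stretch term 0.5*k*(r - r0)**2 in arbitrary consistent units."""
    return 0.5 * k * (r - r0) ** 2

print(bond_stretch_energy(0.80))  # energy cost of stretching the bond to 0.80
```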
Interpreting molecular wave functions
Computational chemical methods in solid state physics
Main article: Computational chemical methods in solid state physics
Chemical dynamics
Software packages
A number of self-sufficient software packages include many quantum-chemical methods, and in some cases molecular mechanics methods. The following table header illustrates the capabilities of the most versatile software packages, namely those that show an entry in two or more columns. There are separate lists for specialized programs.
Package | Molecular Mechanics | Semi-Empirical | Hartree-Fock | Post-Hartree-Fock methods | Density Functional Theory
See also
Cited References
1. Colegrove, B. T., Schaefer, Henry F. III (1990). Disilyne (Si2H2) revisited. Journal of Physical Chemistry 94: 5593.
2. Grev, R. S., Schaefer, Henry F. III (1992). The remarkable monobridged structure of Si2H2. Journal of Chemical Physics 97: 7990.
3. Palágyi, Zoltán, Schaefer, Henry F. III, Kapuy, Ede (1993). Ge2H2: A Molecule with a low-lying monobridged equilibrium geometry. Journal of the American Chemical Society 115: 6901 - 6903.
4. O'Leary, P., Thomas, J. R., Schaefer III, H. F., Duke, B. J. and B. O'Leary (1995). A study of the Silagermylyne (SiGeH2) molecule: A new monobridged structure. International Journal of Quantum Chemistry, Quantum Chemistry Symposium 29: 593 - 604.
5. Stephens, J. C., Bolton, E. E.,Schaefer, H. F. III, and Andrews, L. (1997). Quantum mechanical frequencies and matrix assignments to Al2H2. Journal of Chemical Physics 107: 119 - 223.
6. Palágyi, Zoltán, Schaefer, Henry F. III, Kapuy, Ede (1993). Ga2H2: planar dibridged, vinylidene-like, monobridged and trans equilibrium geometries. Chemical Physics Letters 203: 195 - 200.
7. Thomas, R, O'Leary, P., DeLeeuw, B. J., Schaefer III, H. F., Duke, B. J., and O'Leary, B. (1993). The structurally-rich potential energy surface of the Alagallylyne (AlGaH2) molecule. Journal of Physical Chemistry 106: 7372 - 7379.
8. DeLeeuw, B. J., Grev, R. S. and Schaefer, Henry F. III (1992). A comparison and contrast of selected saturated and unsaturated hydrides of group 14 elements. Journal of Chemical Education 69: 441.
Advances in Astronomy
Volume 2008 (2008), Article ID 870804, 14 pages
Research Article
Entropy Maximization, Cutoff Distribution, and Finite Stellar Masses
Department of Physics, Banaras Hindu University, Varanasi 221 005, India
Received 18 April 2008; Revised 16 July 2008; Accepted 26 August 2008
Academic Editor: Giovanni Carraro
Conventional equilibrium statistical mechanics of open gravitational systems is known to be problematical. We first recall that spherical stars/galaxies acquire unbounded radii, become infinitely massive, and evaporate away continuously if one uses the standard Maxwellian distribution (which maximizes the usual Boltzmann-Shannon entropy and hence has a tail extending to infinity). Next, we show that these troubles disappear automatically if we employ the exact most probable distribution (which maximizes the combinatorial entropy and hence possesses a sharp cutoff tail). Finally, if astronomical observation is carried out on a large galaxy, then the Poisson equation together with the thermal de Broglie wavelength provides useful information about the cutoff radius, the cutoff energy, and the huge quantum number K up to which the cluster exists. Thereby, a refinement over the empirical lowered isothermal King models is achieved. Numerically, we find that the most probable distribution (MPD) prediction fits well the number density profile near the outer edge of globular clusters.
1. Introduction
It is well known that standard methods [1, 2] of equilibrium statistical mechanics run into conceptual difficulties when applied to gravitationally bound systems [3-11]. Basically, these troubles arise due to the peculiar behaviour of the gravitational interaction (either the pair potential or the mean field) at short or long distances. The aim of the present paper is to focus attention on the triple problems, namely, the unbounded radius, infinite mass, and continuous evaporation of every stellar/galactic system described by the conventional Maxwell-Boltzmann (hereinafter referred to as MB) distribution.
Section 2 below points out that, since the MB function maximizes only the simple-minded Boltzmann-Shannon entropy, its tail becomes illogical in the energy cells of small occupancy. The ensuing problems of the Maxwellian distribution cannot really be overcome by using ad hoc prescriptions such as enclosing the system in a hypothetical box [7] or modifying the Maxwellian form empirically by invoking a gravitational tidal cutoff [4, 8]. Next, Section 3 presents a detailed derivation of our most probable distribution (MPD) f by taking hints from a preliminary investigation by Menon and Agrawal [12] in the molecular context and by Menon et al. [13] in the cosmological context. Such an f maximizes rigorously the more sophisticated combinatorial entropy, and the corresponding variational conditions dictate that f must possess a sharply truncated tail. Next, Section 4 demonstrates how our MPD idea applied to cosmology resolves the aforesaid troubles of the MB formalism, and how the Poisson equation brings additional features into our theory. We feel that the MPD philosophy may have bright applicational prospects in fitting cosmological data such as the classic study of stellar number densities in globular clusters done by King [14, Figure 2] and the important measurements performed by van Loon et al. [15, Figure 6] showing the velocity distribution of post-main-sequence stars in ω Centauri. Finally, the paper ends by presenting several concluding remarks in Section 5, where some other approaches to the subject (namely, self-consistent Hartree calculations, incomplete relaxation in the low-density tail, canonical ensemble treatment of virialization, occurrence of a stellar mass spectrum in real gravitating systems, etc.) are also mentioned.
Some related aspects of algebraic interest are reported in two useful appendices. Careful study of Sections 2 and 3 will reveal that quantum mechanical discretization of the single-particle levels is very convenient for setting up the combinatorial entropy and for finding the cutoff number; hence, for the sake of ready reference, we collect in Appendix A several known formulae concerning the semiclassical one-body spectrum as well as the energy cell occupation numbers. Also, a detailed treatment of our variational conditions in Section 4 requires that the factorial n! be replaced by the gamma function Γ(n + 1) everywhere (even in cells of small occupancy); hence Appendix B tells why derivatives of factorials or gamma functions can be readily taken even in the cells of small occupation numbers.
2. Difficulties with the MB Distribution
2.1. Preliminaries
This section begins by quickly recalling a standard derivation of the famous Maxwell-Boltzmann distribution in equilibrium statistical mechanics. Particles are assumed to be moving in D spatial dimensions at temperature T under the influence of a mean field potential energy W(r). The one-body energy spectrum is divided into cells, particles are distributed at random over these, those in the jth cell are regarded as mutually identical, and the simple-minded Boltzmann-Shannon entropy functional (1) is set up, with k_B the Boltzmann constant, g_j the cell degeneracy, N the total number of particles, E the total energy, and α and β the Lagrange multipliers (see Appendix A for precise definitions of various symbols). Next, one maximizes with respect to the cell occupation numbers to arrive at the MB solution (2), where the index j has been dropped in the quasicontinuum limit and the upper end of the single-particle energy spectrum has been extended to infinity both for confining as well as nonconfining potentials. Although (2) has been widely applied [1, 2] to gases/liquids kept in the laboratory, its application to open astronomical systems leads to the following serious conceptual puzzles.
2.1.1. Entropy
In the case of gravitational systems, one always looks for the local (not global) maxima of the entropy functional. The MB solution (2) does this job exactly for the Boltzmann-Shannon entropy defined by (1), but only approximately for the more sophisticated combinatorial entropy defined by (10) later. It will be shown in Section 3 that the tail of the MB solution becomes illogical in the energy cells of small occupation numbers.
2.1.2. Density
If (2) is inserted back into the general expression (A.7) of Appendix A for the mass density, one obtains the famous Boltzmann barometric formula (3). The attractive short-distance behaviour of the mean field cannot pose a real problem because the size of the quantum ground level is finite [6]. But the long-distance behaviour is problematical as regards astrophysics in D = 3 dimensions. Indeed, for a dilute gaseous star [2, page 114] without the Poisson equation constraint, one finds the asymptotic behaviour (4). Also, for the isothermal Emden sphere [3, 8] subject to the Poisson equation constraint, one knows the profile (5), with a being the isothermal length scale. Clearly, at large distances the nil/slow decrease of the density in (4) and (5) and the logarithmic increase of W in (5) are unphysical.
2.1.3. Radius
From the MB density (3), one computes the mean size of the system via (6), which diverges both for gaseous stars (4) and Emden spheres (5).
2.1.4. Mass
The total mass of the MB system is calculated from (7), which also diverges for the two cases mentioned above. Thus, in the Boltzmann-Shannon view, the most likely state of an isotropic stellar system has infinite mass.
2.1.5. Evaporation
Since all regions of the phase space up to infinite energy are allowed to an open MB system, the dilute gaseous star, for example, goes on evaporating with time, thereby producing a net outgoing flux of particles [2, page 114] at every positive thermodynamic temperature T, as expressed by (8). Of course, the isothermal sphere can be stable against evaporation [9], but its mean field growing logarithmically without bound is unphysical.
King-like Lowered Isothermal Models
In the conventional literature, the above difficulties are usually circumvented by enclosing the system within a hypothetical box of some radius [7], or by modifying the original distribution heuristically into a non-Maxwellian form such as (9), holding below the cutoff and vanishing elsewhere [4, 8, 16-19]. In particular, King [4] and Wilson [19] appealed to the tidal force field of the galaxy for physically setting its outer boundary and assumed the velocity distribution of stars to be cut off at the local escape velocity. Lowered isothermal prescriptions such as (9) are often employed by astronomers to fit data.
Physical Motivation for the Cutoff
If one takes a stellar cluster in an original Liouville collisionless state, then the cluster will start evolving in space-time through trajectory mixing and stellar encounters, which are most frequent in the core region. Mathematically, the complicated dynamics of such a nonequilibrium system is governed by the coupled Fokker-Planck and Poisson equations [17]. Physically, this evolution will involve momentum/mass/heat flow, tide generation, and entropy production. At equilibrium, the macroscopic flows will stop, tides will stabilize, and the entropy would become maximum. Naturally, von Hoerner [20] and King [14] realized that a finite boundary to the star cluster is set up by the tidal force of the galaxy, that is, the cutoff tail in the essentially classical stellar systems can be ascribed to the physical outcome of the boundary conditions and/or constraints (independent of the Planck constant).
3. Our Most Probable Distribution (MPD)
3.1. Preliminaries
We adopt the view that the above-mentioned King models can be refined further by utilizing the following facts. (i) At equilibrium, the entropy of a multiparticle thermal system should become a (local) maximum. Of course, the Boltzmann-Shannon definition in (1) will not serve the purpose due to the difficulties of the Maxwellian; we shall show in (11) and (12) below that a more suitable candidate is the so-called combinatorial entropy S that counts the number of microstates in energy cells corresponding to a specified total particle number N and total energy E. (ii) The resulting most probable distribution f should develop a tail which is automatically truncated at a finite energy. This is because a star moving in the mean potential field will have a farthest turning point, whose distance may now be identified with the classical King radius of the galaxy. (iii) By Bohr's correspondence principle, classical motion is the limiting case of quantum motion in states of very large quantum numbers. The cutoff quantum number K and the cutoff energy should be determinable from the variational constraint equations of our MPD theory provided that h is brought into the picture explicitly. (iv) Our MPD solution for f should be able to provide a theoretical justification (or better characterization) of the lowered isothermal Maxwellian models (9). Now we shall demonstrate how such a task is accomplished in practice.
Gibbs Combinatorial Entropy
We follow the basic theme of Huang [1, page 182] and a preliminary investigation by Menon and Agrawal [12] as well as by Menon et al. [13]. The single-particle spectrum is divided into J cells into which the particles are distributed at random, such that the jth cell has a central energy, width, degeneracy, occupation number, and occupation probability per state as defined in Appendix A. Next, treating the particles in the jth cell as indistinguishable, a Gibbs combinatorial entropy functional S is constructed via (10).
Gamma Function Form
We deliberately rewrite (10) in the equivalent form (11). The replacement of factorials by gammas has several algebraic advantages. (i) The equality n! = Γ(n + 1) is exact at the integer values. (ii) The asymptotic behaviours of the factorial and the gamma function are the same. (iii) Hence, by a theorem due to Carlson [21, 22], the gamma function provides the most economical, essentially unique continuation of the factorial to all continuous values of its argument. (iv) While setting up the variational conditions later, we shall need to replace the derivative of the log-factorial evaluated at integer values by the digamma function [23] computed at general continuous values; this problem of integer programming is handled in Appendix B by using an efficient finite-difference package for all natural numbers up to 4. (v) Appendix B also shows that the numerical differentiation of ln Γ(n + 1) can be readily done even at small values of n, giving results in good agreement with the digamma function.
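A quick numerical check of this point (my own sketch, assuming SciPy is available; the paper relegates the details to Appendix B): the central finite difference of ln Γ(n + 1) reproduces the digamma function ψ(n + 1) even at small occupation numbers.

```python
from scipy.special import gammaln, digamma

h = 1e-5
for n in [0.1, 0.5, 1.0, 2.0, 4.0]:
    # derivative of ln Gamma(n + 1) by central finite differences
    finite_diff = (gammaln(n + 1 + h) - gammaln(n + 1 - h)) / (2 * h)
    print(f"n = {n}: finite diff = {finite_diff:.6f}, digamma = {digamma(n + 1):.6f}")
```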
Exact Variational Conditions
Next, we consider the objective functional (12) to be maximized, where α and β are unknown Lagrange multipliers. Equating to zero the partial derivatives with respect to the cell occupation numbers leads to the set of exact variational conditions (13a) and (13b), still using the discrete index j [12].
Without making any assumption concerning the largeness or smallness of the occupation numbers, we can rewrite (13a) in the compact form (14a), where the notation was already encountered earlier in (2). In principle, (14a) can be solved for the desired cell occupation numbers. Thereafter, the Lagrange multipliers α and β can be determined from the constraints (13b). Equivalently, the chemical potential and thermodynamic temperature T may be introduced via (14b). Finally, if the total number J of levels is very large, we are permitted to take the quasicontinuum limit (A.4), leading to a continuous distribution by dropping the index j and converting sums into integrals. Let us derive several interesting properties of our most probable distribution (MPD) defined by (13a), (13b) and (14a), (14b) with the suffix j omitted.
Location of The Peak
Differentiating (14a) with respect to energy, we get (15). Clearly, the MPD occupancy and the MB occupancy are both peaked at a common energy which satisfies (16), where the dot stands for the derivative with respect to energy. Typical algebraic estimates of the peak location for the soluble potential models will be reported later in (28).
Large-Occupancy Region
In the so-called head region of the continuous distribution, the cells have large occupancy, so that Stirling's approximation holds in the fundamental equation (14a). Hence the MB solution (17) is roughly retrieved, but it must be violated in the cells where the occupancy becomes comparable to, or less than, unity.
The Tail Region
On the other extreme lies the tail region of the continuous distribution, where the cell occupancy becomes small. There the digamma function possesses the Taylor expansion (18), in which ψ(1) = −γ is the negative of Euler's constant and the higher coefficients involve Riemann zeta values. Substitution of the expansion (18) into the fundamental equation (13a) leads to the following three surprising yet important observations.
(i) The tail of the distribution intersects the energy axis at a cutoff point satisfying (19), where the suffix K refers to the cutoff energy cell. (ii) The said intersection happens linearly because, in its neighbourhood, the occupancy vanishes linearly, as in (20). (iii) Extension of the graph beyond the cutoff point is not allowed, because that would tend to make the occupancy negative in (13a), as in (21), implying that the original occupied spectrum (A.2) has shrunk below J due to strict entropy maximization under stable equilibrium. (The opposite possibility would correspond to unstable equilibrium, that is, continuous evaporation of the system.) Schematic plots of the MB and MPD distributions versus energy are shown in Figure 1. Typical algebraic estimates of the cutoff energy and quantum number K for the solvable models will be reported later in (28).
Figure 1: Schematic plots (not to scale) of the distribution functions in the MB approach (cf. (2)) and MPD theory (cf. (23)). (Plot of the occupancy function would have shown a peak in both cases.)
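To make the cutoff concrete, here is a hedged numerical sketch (mine, not the authors'; it schematically writes the variational condition (14a) as ψ(n + 1) = x, with x falling as the cell energy rises): the occupancy n reaches zero exactly at x = ψ(1) = −γ ≈ −0.5772, and no solution with n ≥ 0 exists below it, which is the sharp tail truncation described above.

```python
from scipy.optimize import brentq
from scipy.special import digamma

def occupancy(x):
    """Invert digamma(n + 1) = x for the occupancy n >= 0."""
    return brentq(lambda n: digamma(n + 1.0) - x, 0.0, 1e6)

for x in [2.0, 0.0, -0.3, -0.55, -0.5772]:
    print(f"x = {x}: n = {occupancy(x):.6f}")  # n shrinks to 0 as x -> -0.5772...
```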
Compact Solution for the Occupancy
Our equation (14a) is a transcendental equation whose precise analytical solution in closed form is not known. Fortunately, there exists an ansatz (22) which works excellently throughout, as shown graphically in Figure 2. Combining (14a) and (22), we obtain a very compact, quite accurate MPD solution (23) valid in all energy cells of relevance. It is interesting to note that if the degeneracy function were replaced by a constant in (23), our MPD solution would agree with the first line of (9), implying a sort of justification for the lowered isothermal Maxwellian models. Actually, our numerically accurate solution (23) should be regarded as a better characterization, since the degeneracy function is strongly energy-dependent.
Figure 2: The exact digamma function and three numerical approximations to the same, used in the context of (22), (17), and (18).
Compact Number Condition
Combining the number constraint (13b) with the general solution (23), we can define an effective number through (24a). Here we have employed the quasicontinuum limit (A.4) and introduced (24b). This gives a formal expression for the Lagrange multiplier (or reduced chemical potential), provided that the underlying mean field W or its reduced degeneracy function is known.
Compact Cutoff Condition
Lastly, we convert the cutoff criterion (19) into (25a). Eliminating the chemical potential with the help of (24b), we find (25b), which yields a formal expression for the cutoff energy whose functional dependence can be inverted to specify also the number K of levels. The sharply cutoff tail of (21) will play a crucial role in the cosmological application to be discussed later in Section 4.
Illustration for the (Truncated) Oscillator Well
The above methodology may be illustrated in the case of the truncated harmonic oscillator potential (26) listed in Table 1. Before going ahead with the algebra, the following important remarks should be kept in mind. (a) If the well were untruncated, that is, if the step function in (26) were absent, then all particles would remain truly confined, the Boltzmann mass density (3) would vanish asymptotically, and the MB distribution would not be problematical. (b) However, if the well is truncated by the use of the step function in (26), then the potential vanishes at large distances, particles can be ejected into the continuum, the MB distribution becomes problematical, and the MPD philosophy becomes very useful. (c) Near the origin the oscillator potential is rather flat, that is, smoothly varying, so that it can approximately mimic the realistic mean field in the core region of astronomical galaxies. In sharp contrast, the truncated-linear and Coulomb-like potentials of Table 1 cannot do so, since these vary rather quickly near the origin. (d) As they stand, the depth and range are only illustrative parameters introduced in (26). However, when we come to cosmological applications in Section 4 (especially the Poisson equation), it will be found that these parameters are directly related to the physical mass M and observed radius of the galaxy. We are now ready to apply the MPD program to (26).
Table 1: Properties of the one-body semiclassical spectrum labeled by the integer j for four solvable potentials in D dimensions. The well depth and the range R of the rectangular, linear, and oscillator wells are made finite using the unit step function. The Coulomb well is left unrestricted over all r. For other notations, see Appendix A. (We do not tabulate the rigid box potential model, whose energy spectrum is unbounded above; such a model is of little interest in cosmology.)
The Tilde Nomenclature
First, we read off the required spectral symbols from the fourth column of Table 1. Next, for algebraic convenience, the dimensionless quantities (27) are defined along with the thermal de Broglie wavelength. A few remarks are in order concerning these definitions. The Planck constant has appeared through the thermal de Broglie wavelength, and the associated inequality is essential for the validity of classical motion (cf. (A.8)). One of the functions measures the single-particle energy from the ground level; two further symbols may be called the dimensionless chemical potential and the dimensionless cutoff energy, respectively, whose fixation using the MPD constraints is yet to be done. An integral expressed through the incomplete gamma function will play a crucial role below.
Use of MPD Conditions
Remembering the tilde quantities, we can readily evaluate the conditions (16), (24b), and (25b). This yields the peak location, peak height, dimensionless chemical potential, and dimensionless cutoff energy through (28), where the wavy symbol ~ implies the order of magnitude and the multiplicative factors of order unity have been suppressed. We still have to show that the formal equations (28) do admit valid, that is, self-consistent, MPD solutions under suitable restrictions. For this purpose, we consider below two cases in which the relevant dimensionless parameter has markedly different behaviours.
Case 1 (well depth large compared to k_B times temperature). For the truncated oscillator potential (26), we recall the tilde notations (27) and impose the inequalities (29). The physical meaning of these restrictions is as follows. Part of (29) guarantees the validity of classical dynamics in states of large quantum numbers; another condition ensures that the Kth level lies below the ionization threshold for a stable MPD; the defining assumption of Case 1 implies that the actual cutoff energy is several k_B T above the ground level and that the well depth is large compared to k_B T; a further condition implies that the effective number of particles grossly counted per cell is much more than unity; and the last condition means that the system is dilute or nondegenerate (because the packing fraction, that is, the average number of particles contained inside a D-dimensional sphere of thermal de Broglie radius, is small compared to unity). Then consistent handling of the incomplete gamma function in (28) leads to the estimates (30). The present case should apply to usual gases/liquids contained in the laboratory, and we have independently verified that the functional forms of (29) and (30) are very rugged, that is, they hold for all the soluble models reported in Table 1.
Case 2 (well depth comparable to k_B times temperature). Again we recall the tilde notations (27) and impose the orders of magnitude (31). Then the incomplete gamma function and (28) are found to admit the self-consistent estimates (32). The physics of (31) and (32) is as follows. The first statement applies to gravitational systems obeying virialization; the next tells that the total number of particles is of the same order as the number of MPD cells; and the last signifies that the cell occupancies have become comparable to unity, with the Planck constant again playing a role through the thermal de Broglie wavelength. The present case should correspond to open astronomical systems, and the ruggedness of the results (32) can be verified also for the other solvable models in Table 1.
4. Conceptual Application of MPD to Cosmology
We are now ready to resolve the conceptual difficulties of the MB distribution mentioned already in Section 2 by employing the MPD solution obtained in Section 3.
4.1. Entropy
The Boltzmann-Shannon entropy of (1) is simple-minded, its maximization leads to the MB solution in (2) with an untruncated tail, and its generalization to quantum statistics is difficult. In sharp contrast, the combinatorial entropy S of (11) is sophisticated, its maximization leads to our MPD solution in (23) with a truncated tail, and its generalization to quantum statistics is straightforward.
4.2. Density
If the MPD information (21) is inserted back into the general expression (A.7) for the local mass density, we obtain (33), which surprisingly vanishes where W(r) equals the cutoff energy. This is explained by remembering that, since no particle in MPD is allowed to have an energy more than the cutoff energy, there exists a largest classical turning point beyond which the density must become zero identically, as in (34), in sharp contrast to the MB density profiles (3)–(5). We can also find the rate at which the density approaches zero as r tends to the cutoff radius. (20) has already told us that the distribution vanishes linearly in the tail region; hence, (33) yields the leading behaviour (35). Since in a "good" MPD solution all the relevant parameters are finite, our result (35) tells that the mass density obeys a power law near the edge of the system.
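For completeness, here is my reconstruction of the intermediate step behind the edge law, using only the linear vanishing of f stated in (20), D = 3, and writing the cutoff energy as $\epsilon_K$ (a label of mine; the stripped equations presumably carry the paper's own symbol): integrating over momenta up to the local maximum value,

$$\rho(r) \;\propto\; \int_0^{p_{\max}} p^2 \left[\epsilon_K - W(r) - \frac{p^2}{2m}\right] \mathrm{d}p \;=\; \frac{2(2m)^{3/2}}{15}\,\bigl[\epsilon_K - W(r)\bigr]^{5/2}, \qquad p_{\max} = \sqrt{2m\bigl[\epsilon_K - W(r)\bigr]},$$

so the density vanishes as the 5/2 power of the local energy margin, which is what later makes the 2/5 power of the number density linear near the edge (cf. (47)).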
4.3. Radius
Clearly, the cutoff distance in (34) is an upper bound on the size of our galactic system and, for binding, it must not exceed the turning point just before the continuum starts (in the soluble models of Table 1, this role was played by the range R). Since the density vanishes beyond the cutoff distance, the MPD integral defining the average size will also converge, in sharp contrast to the MB mean radius (6).
4.4. Mass
By the same token, the MPD integral defining the total mass of the stellar system also exists, in sharp contrast to the MB mass (7).
4.5. Nonevaporation
As is well known, if an attractive mean field vanishes asymptotically, then zero energy is called the ionization threshold. Hence, our galaxy will be stable against evaporation if the MPD cutoff energy happens to be negative at the given thermodynamic temperature T. Consequently, for a negative cutoff energy there is no net outgoing particle flux, in sharp contrast to the MB result (8).
Of course, a galaxy which is observed experimentally to evaporate is not in true equilibrium. Then simplifying restrictions like (31) may not hold, that is, the cutoff conditions (19) and (32) will admit a positive root for the cutoff energy.
Poisson Equation Implications
So far in our treatment, the detailed algebraic form of the mean field W(r) was not required explicitly for self-gravitating systems. Actually this is a tough problem theoretically/numerically, because one must solve the coupled equations for the distribution function f and the mass density in accordance with the Poisson equation (39) in 3 dimensions, where G is the gravitational constant. Our limited aim in the present paper will, however, be served by noting the following features.
4.6. Features
(a) Since the density is sharply cut off at the edge, by Gauss's theorem the exact potential energy and force at exterior points assume the Newtonian form (40).
(b) At the edge itself, the potential energy becomes equal to the cutoff energy, namely (41).
(c) In the interior region, the mean field may get smoothened so as to yield a finite depth by virtue of the gravitational virial theorem.
(d) At interior points, the exact profile of the mean field is not known a priori, since it has to be, in general, computed numerically by solving (39). However, for the purposes of illustration, we can represent it in the interior by an oscillator form with unknown phenomenological constants, namely a depth and a range R. The corresponding interior potential energy and force at the system edge then follow. Matching these to the exterior values given by (40) at the edge, we arrive at the identification (44). Thus, the interior depth is somewhat greater and R somewhat larger than the naive edge estimates (although the orders of magnitude are the same).
Suggested Procedure for Cosmologists
Suppose a practical astronomical observation has been made on a cluster of stars. To apply our MPD theory to the collected data, the cosmologist should proceed through the following steps.
Step 1 (characterization parameters). From the observed size and the known mass M of the cluster, the MPD cutoff energy is immediately given by (41) as Next, the oscillator well depth and the range parameter R for motion inside the cluster are set up from (44) as and Next, according to (32) applicable to cosmology, the MPD parameters have the rough orders of magnitude upon using the value of given by the first line of (27) under virialization. Next, the cosmologist may treat (45) as providing a new mass-versus-radius relationship for nondegenerate clusters, whose experimental status is, however, not yet studied. Finally, for an accurate interlink among all MPD parameters, the astronomer may like to solve the transcendental equations (27) numerically.
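For concreteness, Step 1 can be wired up as a short script. Everything below is a hedged illustration: the relations ε_c = −GMm/r_b, V0 = (3/2)GMm/r_b, and R = √3 r_b are our reconstructions of (41) and (44), and the cluster numbers are hypothetical placeholders, not M 15 data.

```python
# Hypothetical illustration of Step 1: characterization parameters from an
# observed cluster radius and mass. Formulas are our reconstruction of (41)
# and (44); all input numbers are placeholders.
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
pc    = 3.086e16         # m

M   = 5e5 * M_sun        # hypothetical total cluster mass
r_b = 20.0 * pc          # hypothetical observed cluster radius
m   = 0.8 * M_sun        # assumed average stellar mass

eps_c = -G * M * m / r_b          # MPD cutoff energy, cf. (41)
V0    = 1.5 * G * M * m / r_b     # oscillator well depth, cf. (44)
R     = 3**0.5 * r_b              # oscillator range parameter, cf. (44)

print(f"cutoff energy eps_c = {eps_c:.3e} J (negative => no evaporation)")
print(f"well depth    V0    = {V0:.3e} J")
print(f"range         R     = {R/pc:.1f} pc")
```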
Step 2 (MPD density near the edge). Next, the astronomer may look at (35), which gives the leading behaviour of the stellar number density near the cluster's boundary: since the mean field W(r) becomes Newtonian near the periphery. This can be cast into a more convenient form by defining the variable choosing a normalization point and working with the modified function which becomes unity at but vanishes at To test the validity of (47), the astronomer may, for example, concentrate on the star counts made on photographs of the cluster M 15 (see Figure 2 of King [14]) taken with the 48-inch Schmidt camera at the Palomar Observatory. The results are plotted in Figure 3. Clearly, there is quite good agreement between the experimental observations and the MPD prediction, although a slight curvature in the data trend may imply the presence of additional weak nonlinear terms on the RHS of (47).
Figure 3: Normalized stellar number density raised to the 2/5 power, that is, near the edge of the cluster M 15. The experimental data points are adapted from King [14, Figure 2]. The theoretical MPD prediction is computed from (47) with
Step 3 (comparison with King density). Next, it is worthwhile to consider the function and expand its MPD expression (47) around the matching point binomially in the form Dropping the term, the cosmologist retrieves the famous formula proposed empirically by King, namely, whose square gives King's profile [14, equation (2)] near the cluster's periphery as It is well known that the phenomenological proposal (49) has been extensively used in the past by astronomers. For example, in the context of the M 15 cluster, Figure 4 shows the plot of near the cluster's boundary. Clearly, the agreement between experimental observation and King's parametrization is good, ignoring the slight curvature in the data trend. Incidentally, the qualities of fit seen in Figures 3 and 4 are quite comparable, implying that, with the present accuracy of measurement, it is not possible to say whether the MPD formula (47) or the King recipe (49) is superior.
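The qualitative content of Figures 3 and 4 can be reproduced in a few lines. The exponents below are inferred from the plotted powers (2/5 for MPD, 1/2 for King), not read from the lost equations: near the edge we take n_MPD ∝ x^(5/2) and n_King ∝ x^2 in the variable x = 1/r − 1/r_t.

```python
# Hedged sketch of the near-edge density laws behind Figures 3 and 4.
# Assumed, consistent with the plotted powers: in x = 1/r - 1/r_t,
#   MPD  : n(x) ~ x**2.5   (so n**(2/5) is linear in x)
#   King : n(x) ~ x**2     (so n**(1/2) is linear in x)
import numpy as np

r_t = 1.0                           # edge radius, arbitrary units
r = np.linspace(0.5, 0.999, 200)    # radii approaching the edge
x = 1.0/r - 1.0/r_t

n_mpd, n_king = x**2.5, x**2

print(np.allclose(n_mpd**0.4, x))   # True: MPD profile linear after 2/5 power
print(np.allclose(n_king**0.5, x))  # True: King profile linear after 1/2 power
```

With real star counts, the discriminator would be residual curvature of n^(2/5) versus n^(1/2) as functions of x, which, as noted above, is below the present measurement accuracy.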
Figure 4: Normalized stellar number density raised to the 1/2 power, that is, near the boundary of the M 15 cluster. The experimental data points are read from King [14, Figure 2]. His model fit is given by (49) with
Step 4 (complete density profile). Finally, the astronomer may like to have an expression for the number density valid throughout the range In principle, our MPD distribution function f given by (23) yields the formal expression with the mean potential being approximately harmonic-oscillator-like in the interior and Newtonian near the edge. Unfortunately, analytical evaluation of the phase space integral (51) is somewhat tedious and will be dealt with in a future communication. However, the cosmologist should note that the integral (51) is the algebraic difference of two terms, which is very satisfying because the empirical full density profile written by King [14, equation (14)] also contains a difference of two terms.
Step 5 (velocity distribution of stars). It is standard astronomical practice to measure the local radial velocity distributions (along with other properties) of stars in a globular cluster; see, for example, the extensive photometric study made by van Loon et al. [15] on the post-main-sequence stars in ω Centauri (NGC 5139). The cosmologist may ask how well our MPD distribution function f given by (23) fits the observed data. Unfortunately, a straightforward answer to this question is difficult because the exact values of the unknown parameters and K must be obtained numerically from the transcendental conditions (24b) and (25b). We plan to accomplish this task in a future communication.
5. Concluding Remarks
The main results of the present work appear in the abstract along with Sections 2–4 and are often emphasized by italics. It is hoped that astronomers will benefit from the algebraic properties of MPD derived in Section 3, its cosmological implications mentioned in Section 4, the numerical plots of number density profiles in Figures 3 and 4, and the pointwise comparison between the King model and the MPD philosophy made in Table 2. Clearly, both types of theories can be applied to cosmology, although our f may be regarded as providing a better characterization from the conceptual viewpoint.
Table 2: Salient features of King-like lowered isothermal models (cf. (9)) and our MPD distribution f (cf. (23)).
The essence of our cosmological discussion in Section 4 is the following. Suppose that an astronomer makes observations on a (quantum mechanically nondegenerate) cluster having N stars, total mass M, and radius. Then its MPD solution will be characterized by the cutoff energy and cutoff quantum number. Before ending the paper, we mention below briefly several important points not discussed explicitly in the earlier sections.
(i) In the mean field description of a multiparticle system, fluctuations arising from short-range pair correlations are usually ignored. The effect of fluctuations is likely to be stronger on the MB solution, whose tail extends to infinity in (2). Such an effect is likely to be weak on the MPD solution f, whose tail gets truncated at the cutoff in (23).
(ii) One may argue that a sharp radius is also known to arise in the method of self-consistent Hartree fields applied to gravitational systems [17]. We stress, however, that the Hartree method proceeds through a numerical algorithm because the coupled equations for the mean field and the distribution function must be solved iteratively on a computer. Therefore, our analytical maximum-entropy treatment of Section 3 still retains its novelty.
(iii) One may also argue that it is not meaningful to demand thermodynamic equilibration in the peripheral region of the galaxy because, due to low densities, relaxation may remain incomplete there. However, it must be kept in mind that since gravitational forces are of long range, the mechanism of collisionless relaxation [9] still operates. Therefore, our assumption of equilibration even in the tail region may remain justified.
(iv) Next, mention must be made of some recent investigations [10, 11] carried out on the question of gravitational galactic clustering, their virialization, and the peculiar velocity distribution superposed on the local Hubble flow. These authors start from the N-body cosmological canonical partition function in a box of large volume, perform the individual momentum integrals at the outset over the infinite domain, write the entropy S as the logarithm of a Gibbs integral over the density of states, and minimize the Helmholtz free energy with respect to the internal energy E. Of course, these investigations are very different from our work because we do not need an enclosing box, momentum integrations over an infinite domain are never performed, the entropy functional is combinatorial, and maximization is done with respect to the cell occupation numbers.
(v) Next, suppose that one considers a time span long compared to the two-body relaxation time in a globular stellar cluster. One may argue that a star having energy (i.e., arbitrarily close to zero but still negative) will go far away and yet come back. Since the corresponding turning point may be arbitrarily big, one expects a very small (but not zero) possibility of the star’s existence even at a very large radius. This logic apparently contradicts the MPD result (34) which had claimed that there is no density outside a finite distance
Actually, the above logic has the following very subtle fault. While doing pure dynamics it is enough to find trajectories and their turning points, but while doing statistical mechanics it is essential also to calculate the density profile and the related total mass. Now, in direct analogy with (35) but with the density profile at large distance and its associated Poisson equation become (in dimensions) This result is physically unacceptable because the gravitational potential due to a finite-mass object must fall asymptotically like Hence a logic based on will not work. In sharp contrast, if the globular cluster has a finite experimental mass, then it can be easily described by our MPD solution (34) characterized by bounded and finite
(vi) Finally, a cosmologist may argue that, since real gravitating systems have a mass spectrum of stars, the assumption of particles with the same mass m in MPD may not be justified. We wish to point out that some workers have attempted to apply hydrodynamical equations to globular clusters employing a phase space density involving the continuous mass [24] as an extra variable. Other workers have analyzed phenomenologically the merger of clusters such as Praesepe [25] employing four mass bins. Although, in principle, a multicomponent combinatorial entropy will now replace (10), the corresponding variational conditions (13a) and (13b) will be hard to handle analytically because different chemical potentials and different cutoff energies may have to be assigned to the various components present in the system. An easy approximation will be to still use the MPD formalism of Section 3 based on the single-particle average mass, where the suffix runs over the different species and there are particles of the th type. This prescription should be reasonable for those clusters where the mass dispersion is small (in units of the solar mass).
A. One-Body Description Recapitulated
A.1. Preliminaries
This section will summarize our notations along with several known formulae dealing with the semiclassical single-particle spectrum/distribution without invoking entropy constraints. Some of these formulae will be used explicitly in Sections 3–5 of the text.
A.2. Assumptions and Notations
Consider the nonrelativistic localized motion of a particle in spatial dimensions under the influence of a smooth attractive central field. Classically, the symbols, respectively, denote the mass, distance, absolute momentum, potential energy, applied force, and mechanical energy of this particle. Quantum mechanically, one invokes the Planck constant and solves the Schrödinger equation to determine the energy spectrum, where is the ground level and the highest bound level supported. Of course, solution of the Schrödinger equation for the exact eigenvalues, eigenfunctions, and their degeneracies is generally tedious.
A.3. Sommerfeld Quantization
Perhaps the easiest semiclassical link between the descriptions (A.1) and (A.2) is provided by Sommerfeld's criterion [26], which says that the phase integral or action variable over a complete oscillation should be an integer multiple of the Planck constant h. Then a discrete level in (A.2) corresponds to the classical turning point, local momentum variable, principal quantum number, and level spacing given by (A.3). Since the presence of zero-point energy is of little consequence here, the more sophisticated WKB quantization [27] will not be needed for our purpose. Also, if the number J of supported levels is very large compared to unity, then the quasicontinuum limit can be taken by writing (A.4).
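As a concrete illustration of the criterion (our reading of (A.3): the action ∮p dr equals jh), the sketch below computes semiclassical levels for a hypothetical truncated harmonic well; the well parameters and units are placeholders, not values from Table 1.

```python
# Hedged illustration of Sommerfeld quantization: solve action(eps_j) = j*h
# for an assumed truncated harmonic well V(r) = -V0 + 0.5*m*w**2*r**2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

h, m, w, V0 = 1.0, 1.0, 1.0, 20.0    # placeholder units and parameters

def V(r):
    return -V0 + 0.5*m*w**2*r**2

def action(eps):
    """Phase integral over a complete oscillation between the turning points."""
    r_t = np.sqrt(2.0*(eps + V0)/(m*w**2))
    p = lambda r: np.sqrt(max(2.0*m*(eps - V(r)), 0.0))
    val, _ = quad(p, -r_t, r_t)
    return 2.0*val                    # out and back

for j in range(1, 6):                 # lowest few bound levels (eps < 0)
    eps_j = brentq(lambda e: action(e) - j*h, -V0 + 1e-9, -1e-12)
    print(j, round(eps_j, 4))         # analytic check: eps_j = -V0 + j*h*w/(2*pi)
```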
Gibbs’ Prescription
Further information is obtained by imagining a spherical region of range R and remembering that several quantum states of different orbital angular momenta and magnetic projections may be nearly degenerate at a given energy level. Then the D-dimensional solid angle, total volume V of the region, useful geometrical factor A, Gibbs' phase space element, accumulated number of quantum states below a given energy, local number of states per unit energy interval, and the degeneracy of the jth level itself are read off from (A.5). Here the single-particle Hamiltonian appears, the quasicontinuum limit (A.4) is understood, Γ denotes the gamma function, step the unit step function, and δ the delta function.
A.5. Solvable Potential Models
The methodology described by (A.3)–(A.5) is best illustrated in the case of 4 soluble models, namely, the rectangular, truncated linear, truncated harmonic oscillator, and Coulomb wells. The results are summarized in Table 1 and the following features are worth noticing.
(i) In the case of the rectangular, linear, and oscillator wells, the range R represents the distance beyond which the particle goes into the continuum. By the same token, the highest level J is fixed through the requirement that
(ii) For the Coulomb well, however, since the bound orbits can have any size, one sets By the same token, the ionization threshold appears at
(iii) In every model of Table 1, the semiclassical energy ε increases monotonically with the principal quantum number j, but the trend of the level spacing Δ is not uniform.
(iv) In every soluble model, the level degeneracy where the geometrical factor is of order unity. Hence it is reasonable to expect that, for a more general attractive central field in dimensions, at least for large
(v) Table 1 does not explicitly treat the infinite rectangular well, that is, a rigid box in which particles of any momentum would remain confined. Then, the highest kinematically allowed level would have using the many-body notation of (A.6) below. Of course, the rigid box model is irrelevant in cosmology.
A.6. Multiparticle, Statistical System
In the present paper, we shall not consider pure Bose/Fermi many-body systems, where the strict quantum mechanical identity of all particles is crucial. Ours is the so-called Boltzmann system, where the one-body spectrum is obtained from the Schrödinger equation/semiclassical quantization but strict identity among all particles is not imposed except within the same energy cell. The mean field of (A.1) may be either externally applied or internally generated. Assuming spherical symmetry and independent-particle motion, and ignoring short-range pair correlations, we let the symbols, respectively, denote the specified number of particles, total energy, global average number density, global mean thermodynamic temperature, Boltzmann constant, inverse temperature parameter, and thermal de Broglie wavelength. The one-body phase space may be imagined to be composed of the differential elements (cf. (A.5)) or of J energy cells of successive widths, which are arranged in the sequence (A.2). Then the useful transformation, single-particle energy, one-body distribution function, cell occupancy, local number density, local mass density, total number N, and total energy E are read off from (A.6) and (A.7). Two crucial comments are in order at this stage. (i) The functional form of the distribution is left unspecified at the moment. (ii) Convincing justifications are still needed for retaining ħ in our mechanical as well as statistical expressions (A.3)–(A.7), especially when application to classical galaxies of enormous sizes is being envisaged.
A.7. Importance of Planck Constant
(a) By Bohr’s correspondence principle, the motion of a quantum Schrödinger/Sommerfeld particle tends to become classical in the states of large principal quantum numbers. In the notation of (A.4), this requires where J of Table 1 contains explicitly. (b) Strict Bose/Fermi statistical systems tend to obey classical statistics at low density and high temperature if in the notation of (A.6). This requires that the linear size of the system be large compared to the thermal de Broglie wavelength, that is, Hence the -dependent dual inequalitiestell very precisely when a multiparticle system can be called “classical.” Such a precision would be lacking if were dropped at the outset in cosmological applications. (c) While the Sommerfeld quantum number j in (A.3) is very suitable for labelling the distinct energy levels, the Gibbs degeneracy g (derived from the phase-space element ) in Table 1 is equally convenient to count the precise number of states in any cell. (d) The precise knowledge of a cutoff quantum number and energy will be shown to be crucial to find the rigorous most probable distribution in Section 3 which job cannot be done in the cosmological context of Section 4 if is dropped at the outset (in the classical phase space element ).
B. Extension from Integer to Continuous
In this appendix, we carefully examine the numerical justification of some algebraic manipulations performed on the combinatorial entropy S of (10) and (11).
Factorials versus Gammas
As is well known, n! identically equals Γ(n + 1) at all nonnegative integers n, as seen from the second line of the following brief table. Its third line records the corresponding values of the natural logarithm ln n! to be used as input in Table 3.
Table 3
B.1. Numerical Differentiation
Next, we address the subtle question of computing (B.1), where the suffix "num" stands for "numerically" and the inequality implies that n has become a continuous variable over the test range [0, 4]. This is a problem of integer programming and we tackle it by adopting the following procedure.
(i) First, a finite-difference table was prepared using the above-mentioned data on ln n!.
(ii) Next, at several chosen integral/fractional values of n, (B.1) was computed employing an efficient package based on Markoff's version of Newton's interpolation formula, differentiated [23, page 883].
(iii) Finally, a comparison was made with the standard values of the digamma function [23, pages 258, 267, 272] obtained from the "exact" definition (B.2).
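This comparison is easy to reproduce with standard tools. The sketch below uses our own grid and differentiation method (not the package cited above): it tabulates ln n! = ln Γ(n + 1), differentiates by finite differences, and compares with the exact digamma ψ(n + 1).

```python
# Hedged reproduction of Appendix B: numerical derivative of ln(n!) versus
# the exact digamma function psi(n+1). Grid and method are our choices.
import numpy as np
from scipy.special import gammaln, digamma

n = np.linspace(0.0, 4.0, 401)        # continuous test range [0, 4]
lnfact = gammaln(n + 1.0)             # ln Gamma(n+1), i.e. ln n! at integers

d_num = np.gradient(lnfact, n)        # finite-difference derivative, cf. (B.1)
d_exact = digamma(n + 1.0)            # "exact" definition, cf. (B.2)

for k in range(5):                    # compare at the integer points 0..4
    i = k * 100                       # index of integer k on this grid
    print(k, round(d_num[i], 5), round(d_exact[i], 5))
```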
B.2. Results
The accompanying table shows that the values of (B.1) and of (B.2) agree within 1% to 5% at the input integer values. We also see that their mutual agreement is good at small fractional values of n. Therefore, taking derivatives of ln n! at all continuous values of n is mathematically justified in (13a) and (14a) (see Table 4).
Table 4
Acknowledgments
The authors thank the Council of Scientific and Industrial Research (CSIR), New Delhi, India, for the financial support.
References
1. K. Huang, Statistical Mechanics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.
2. L. D. Landau, E. M. Lifshitz, and L. P. Pitaevskii, Statistical Physics, Part 1, Pergamon, Oxford, UK, 1980.
3. S. Chandrasekhar, Principles of Stellar Dynamics, Dover, New York, NY, USA, 1960.
4. I. R. King, "The structure of star clusters. II. Steady-state velocity distributions," The Astronomical Journal, vol. 70, p. 376, 1965.
5. J.-M. Lévy-Leblond, "Nonsaturation of gravitational forces," Journal of Mathematical Physics, vol. 10, no. 5, pp. 806–812, 1969.
6. E. H. Lieb, "The stability of matter," Reviews of Modern Physics, vol. 48, no. 4, pp. 553–569, 1976.
7. D. Lynden-Bell and R. M. Lynden-Bell, "On the negative specific heat paradox," Monthly Notices of the Royal Astronomical Society, vol. 181, pp. 405–419, 1977.
8. J. J. Binney and S. D. Tremaine, Galactic Dynamics, Princeton University Press, Princeton, NJ, USA, 1987.
9. T. Padmanabhan, "Statistical mechanics of gravitating systems," Physics Reports, vol. 188, no. 5, pp. 285–362, 1990.
10. F. Ahmad, W. C. Saslaw, and N. I. Bhat, "Statistical mechanics of the cosmological many-body problem," The Astrophysical Journal, vol. 571, no. 2, pp. 576–584, 2002.
11. B. Leong and W. C. Saslaw, "Gravitational binding, virialization, and the peculiar velocity distribution of the galaxies," The Astrophysical Journal, vol. 608, no. 2, pp. 636–646, 2004.
12. V. J. Menon and D. C. Agrawal, "Method of most probable distribution: new solutions and results," Pramana: Journal of Physics, vol. 33, no. 4, pp. 455–465, 1989.
13. V. J. Menon, R. K. Dubey, and D. N. Tripathi, "Stable galaxies of finite masses in the most probable distribution," Physica A, vol. 367, pp. 269–275, 2006.
14. I. R. King, "The structure of star clusters. I. An empirical density law," The Astronomical Journal, vol. 67, no. 8, pp. 471–485, 1962.
15. J. Th. van Loon, F. van Leeuwen, B. Smalley, et al., "A spectral atlas of post-main-sequence stars in ω Centauri: kinematics, evolution, enrichment and interstellar medium," Monthly Notices of the Royal Astronomical Society, vol. 382, no. 3, pp. 1353–1374, 2007.
16. R. v. d. R. Woolley and D. A. Robertson, "Studies in the equilibrium of globular clusters (II)," Monthly Notices of the Royal Astronomical Society, vol. 116, p. 288, 1956.
17. L. Spitzer Jr. and R. Härm, "Evaporation of stars from isolated clusters," The Astrophysical Journal, vol. 127, pp. 544–550, 1958.
18. R. W. Michie, "On the distribution of high energy stars in spherical stellar systems," Monthly Notices of the Royal Astronomical Society, vol. 125, p. 127, 1963.
19. C. P. Wilson, "Dynamical models of elliptical galaxies," The Astronomical Journal, vol. 80, pp. 175–187, 1975.
20. S. von Hoerner, "Internal structure of globular clusters," The Astrophysical Journal, vol. 125, p. 451, 1957.
21. V. de Alfaro and T. Regge, Potential Scattering, North-Holland, Amsterdam, The Netherlands, 1965.
22. R. G. Newton, The Complex j-Plane, Benjamin, New York, NY, USA, 1964.
23. M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions, Dover, New York, NY, USA, 1972.
24. S. Ninkovic, "A globular-cluster model with variable mean mass of a single star," Bulletin de l'Observatoire Astronomique de Belgrade, no. 154, pp. 9–12, 1996.
25. K. Holland, R. F. Jameson, S. Hodgkin, M. B. Davies, and D. Pinfield, "Praesepe—two merging clusters?" Monthly Notices of the Royal Astronomical Society, vol. 319, no. 3, pp. 956–962, 2000.
26. L. Pauling and E. B. Wilson, Introduction to Quantum Mechanics, McGraw-Hill, New York, NY, USA, 1935.
27. L. I. Schiff, Quantum Mechanics, McGraw-Hill, New York, NY, USA, 1968. |
ac4013383985e90e | Walter H. Schottky
Born: 23 July 1886, Zürich, Switzerland
Died: 4 March 1976, Pretzfeld, West Germany
Residence: Germany
Nationality: German
Fields: Physics
Institutions: University of Jena; University of Würzburg; University of Rostock; Siemens Research Laboratories
Alma mater: University of Berlin
Doctoral advisors: Max Planck, Heinrich Rubens
Notable students: Werner Hartmann
Known for: Schottky effect, Schottky barrier, Schottky contact, Schottky anomaly, screen-grid vacuum tube, ribbon microphone, ribbon loudspeaker, theory of field emission, shot noise
Notable awards: Hughes Medal (1936), Werner von Siemens Ring (1964)
Walter Hermann Schottky (23 July 1886 – 4 March 1976) was a German physicist who played a major early role in developing the theory of electron and ion emission phenomena,[1] invented the screen-grid vacuum tube in 1915 and the pentode[citation needed] in 1919 while working at Siemens, co-invented the ribbon microphone and ribbon loudspeaker along with Dr. Erwin Gerlach in 1924[2] and later made many significant contributions in the areas of semiconductor devices, technical physics and technology.
Early life
Schottky's father was mathematician Friedrich Hermann Schottky (1851–1935). Schottky had one sister and one brother. His father was appointed professor of mathematics at the University of Zurich in 1882, and Schottky was born four years later. The family then moved back to Germany in 1892, where his father took up an appointment at the University of Marburg.[citation needed]
Schottky graduated from the Steglitz Gymnasium in Berlin in 1904. He completed his B.S. degree in physics at the University of Berlin in 1908, and he completed his Ph.D. in physics at the Humboldt University of Berlin in 1912, studying under Max Planck and Heinrich Rubens, with a thesis entitled Zur relativtheoretischen Energetik und Dynamik.
Schottky's postdoctoral period was spent at the University of Jena (1912–14). He then lectured at the University of Würzburg (1919–23). He became a professor of theoretical physics at the University of Rostock (1923–27). For two considerable periods, Schottky worked at the Siemens Research Laboratories (1914–19 and 1927–58).
In 1924, Schottky co-invented the ribbon microphone along with Erwin Gerlach. The idea was that a very fine ribbon suspended in a magnetic field could generate electric signals. This led also to the invention of the ribbon loudspeaker by using it in the reverse order, but it was not practical until high flux permanent magnets became available in the late 1930s.[2]
Major scientific achievements
Possibly, in retrospect, Schottky's most important scientific achievement was to develop (in 1914) the well-known classical formula, now written
$$E_{int}(x) = -\frac{q^2}{16\pi\epsilon_0 x}$$
which computes the interaction energy between a point charge q and a flat metal surface, when the charge is at a distance x from the surface. Owing to the method of its derivation, this interaction is called the "image potential energy" (image PE). Schottky based his work on earlier work by Lord Kelvin relating to the image PE for a sphere. Schottky's image PE has become a standard component in simple models of the barrier to motion, M(x), experienced by an electron on approaching a metal surface or a metal–semiconductor interface from the inside. (This M(x) is the quantity that appears when the one-dimensional, one-particle Schrödinger equation is written in the form
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + M(x)\,\psi = E\,\psi.$$
Here, $\hbar$ is Planck's constant divided by 2π, and m is the electron mass.)
The image PE is usually combined with terms relating to an applied electric field F and to the height h (in the absence of any field) of the barrier. This leads to the following expression for the dependence of the barrier energy on distance x, measured from the "electrical surface" of the metal, into the vacuum or into the semiconductor:
$$M(x) = h - eFx - \frac{e^2}{16\pi\epsilon_0\epsilon_r x}.$$
Here, e is the elementary positive charge, ε0 is the electric constant and εr is the relative permittivity of the second medium (=1 for vacuum). In the case of a metal–semiconductor junction, this is called a Schottky barrier; in the case of the metal-vacuum interface, this is sometimes called a Schottky–Nordheim barrier. In many contexts, h has to be taken equal to the local work function φ.
This Schottky–Nordheim barrier (SN barrier) has played an important role in the theories of thermionic emission and of field electron emission. Applying the field causes lowering of the barrier, and thus enhances the emission current in thermionic emission. This is called the "Schottky effect", and the resulting emission regime is called "Schottky emission".
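The size of this barrier lowering follows from maximizing M(x). A minimal numeric sketch for the vacuum case (ε_r = 1), with an illustrative field value:

```python
# Hedged sketch: Schottky lowering of the barrier
# M(x) = h - e*F*x - e**2/(16*pi*eps0*x). Maximizing M gives
# delta = sqrt(e**3 * F / (4*pi*eps0)) (in joules; divide by e for eV).
import math

e, eps0 = 1.602e-19, 8.854e-12        # elementary charge, electric constant
F = 1.0e9                             # applied field, V/m (illustrative)

x_m = math.sqrt(e / (16*math.pi*eps0*F))      # position of barrier maximum
delta = math.sqrt(e**3 * F / (4*math.pi*eps0))

print(f"x_m   = {x_m*1e9:.2f} nm")    # ~0.6 nm
print(f"delta = {delta/e:.2f} eV")    # ~1.2 eV of barrier lowering
```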
In 1923 Schottky suggested (incorrectly) that the experimental phenomenon then called autoelectronic emission and now called field electron emission resulted when the barrier was pulled down to zero. In fact, the effect is due to wave-mechanical tunneling, as shown by Fowler and Nordheim in 1928. But the SN barrier has now become the standard model for the tunneling barrier.
Later, in the context of semiconductor devices, it was suggested that a similar barrier should exist at the junction of a metal and a semiconductor. Such barriers are now widely known as Schottky barriers, and considerations apply to the transfer of electrons across them that are analogous to the older considerations of how electrons are emitted from a metal into vacuum. (Basically, several emission regimes exist, for different combinations of field and temperature. The different regimes are governed by different approximate formulae.)
When the whole behaviour of such interfaces is examined, it is found that they can act (asymmetrically) as a special form of electronic diode, now called a Schottky diode. In this context, the metal–semiconductor junction is known as a "Schottky (rectifying) contact".
Schottky's contributions, in surface science/emission electronics and in semiconductor-device theory, now form a significant and pervasive part of the background to these subjects. It could possibly be argued that – perhaps because they are in the area of technical physics – they are not as generally well recognized as they ought to be.
He was awarded the Royal Society's Hughes medal in 1936 for his discovery of the Schrot effect (spontaneous current variations in high-vacuum discharge tubes, called by him the "Schrot effect": literally, the "small shot effect") in thermionic emission and his invention of the screen-grid tetrode and a superheterodyne method of receiving wireless signals.
In 1964 he received the Werner von Siemens Ring honoring his ground-breaking work on the physical understanding of many phenomena that led to many important technical appliances, among them tube amplifiers and semiconductors.
The invention of the superheterodyne is usually attributed to Edwin Armstrong. However, Schottky published an article in the Proceedings of the IRE that may indicate he had invented and patented something similar in Germany in 1918.[3]
The Walter Schottky Institute in Germany and the Walter H. Schottky Prize are named after him.
Books written by Schottky
• Thermodynamik, Julius Springer, Berlin, Germany, 1929.
• Physik der Glühelektroden, Akademische Verlagsgesellschaft, Leipzig, 1928.
References
1. ^ Welker, Heinrich (June 1976). "Walter Schottky". Physics Today 29 (6): 63–64. Bibcode:1976PhT....29f..63W. doi:10.1063/1.3023533.
2. ^ a b "Historically Speaking". Hifi World. April 2008. Retrieved April 2012.
3. ^ Schottky, Walter (October 1926). "On the Origin of the Super-Heterodyne Method". Proceedings of the IRE 14 (5): 695–698. doi:10.1109/JRPROC.1926.221074.
|
4d57f09518d52283 |
Let's suppose I have a Hilbert space $K = L^2(X)$ equipped with a Hamiltonian $H$ such that the Schrödinger equation with respect to $H$ on $K$ describes some boson I'm interested in, and I want to create and annihilate a bunch of these bosons. So I construct the bosonic Fock space
$$S(K) = \bigoplus_{i \ge 0} S^i(K)$$
where $S^i$ denotes the $i^{th}$ symmetric power. (Is this "second quantization"?) Feel free to assume that $H$ has discrete spectrum.
What is the new Hamiltonian on $S(K)$ (assuming that the bosons don't interact)? How do observables on $K$ translate to $S(K)$?
I'm not entirely sure this is a meaningful question to ask, so feel free to tell me that it's not and that I have to postulate some mechanism by which creation and/or annihilation actually happens. In that case, I would love to be enlightened about how to do this.
Now, various sources (Wikipedia, the Feynman lectures) inform me that $S(K)$ is somehow closely related to the Hilbert space of states of a quantum harmonic oscillator. That is, the creation and annihilation operators one defines in that context are somehow the same as the creation and annihilation operators one can define on $S(K)$, and maybe the Hamiltonians even look the same somehow.
Why is this? What's going on here?
Assume that I know a teensy bit of ordinary quantum mechanics but no quantum field theory.
Hello Qiaochu, welcome to physics.SE! Nice question and I hope we can expect many more :-) – Marek Jan 8 '11 at 3:31
What is $S^i(K)$ @Qiaochu ? – user346 Jan 8 '11 at 5:10
@space_cadet: the i^{th} symmetric power, i.e. the Hilbert space of states of i identical bosons. – Qiaochu Yuan Jan 8 '11 at 13:14
Ah ok. In the physics literature $H$, almost always, denotes the Hamiltonian and $S$ the action. – user346 Jan 8 '11 at 13:25
$H$ on $Sym^2(K)$ is really $H\otimes 1 + 1 \otimes H$ and likewise for $a$ and $a^\dagger$. So for example the energy is the sum of the (uncoupled) energies. You might have expected $H\otimes H$, for example, but $H$ generates an infinitesimal translation in time. Exponentiating gives the expected result on the propagator $U = \exp(tH)$ as $U\otimes U.$ – Eric Zaslow Jan 8 '11 at 20:20
3 Answers
Accepted answer
Let's discuss the harmonic oscillator first. It is actually a very special system (the one and only of its kind in the whole of QM), itself being already second quantized in a sense (this point will be elucidated later).
First, a general talk about the HO (skip this paragraph if you already know it inside-out). It's possible to express its Hamiltonian as $H = \hbar \omega(N + 1/2)$, where $N = a^{\dagger} a$ and $a$ is a linear combination of the momentum and position operators. By using the commutation relation $[a, a^{\dagger}] = 1$ one obtains a basis $\{ \left| n \right > \mid n \in {\mathbb N} \}$ with $N \left | n \right > = n \left | n \right >$. So we obtain a convenient interpretation: this basis counts the number of particles in the system, each carrying energy $\hbar \omega$, and the vacuum $\left | 0 \right >$ has energy $\hbar \omega \over 2$.
Now, the above construction was actually the same as yours for $X = \{0\}$. Fock's construction (also known as second quantization) can be understood as introducing particles, with $S^i$ corresponding to $i$ particles (so the HO is the second quantization of a particle with one degree of freedom). In any case, we obtain position-dependent operators $a(x), a^{\dagger}(x), N(x)$ and $H(x)$ which are for every $x \in X$ isomorphic to the HO operators discussed previously, and we also obtain a basis $\left | n(x) \right >$ (though I am actually not sure this is a basis in the strict sense of the word; these affairs are not discussed much in field theory by physicists). The total Hamiltonian $H$ will then be an integral $H = \int H(x) dx$. The generic state in this system looks like a bunch of particles scattered all over, and this is in fact the particle description of a free bosonic field.
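To make the ladder-operator algebra above concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument): truncated matrices for $a$, $a^{\dagger}$, $N$ and $H$ on the lowest few number states. Truncation spoils $[a, a^{\dagger}] = 1$ only in the last entry.

```python
# Minimal sketch: single-mode ladder operators truncated to `dim` levels.
import numpy as np

dim = 6
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # a|n> = sqrt(n)|n-1>
adag = a.conj().T                              # creation operator
N = adag @ a                                   # number operator

hbar_omega = 1.0
H = hbar_omega * (N + 0.5*np.eye(dim))

print(np.diag(N))                    # [0, 1, 2, 3, 4, 5]
print(np.linalg.eigvalsh(H))         # hbar*omega*(n + 1/2)
print(np.diag(a @ adag - adag @ a))  # [1, 1, 1, 1, 1, -5]: truncation artifact
```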
I realize I left your original Hamiltonian $H$ out of the discussion. I'll add that to the answer later. For now note that $x$ is in no way special in the above, we could have used other "basis" of $K$ like momentum and in particular energy basis of the $H$. In that case the relevant states for $S(K)$ become $\left | n_0 n_1 \cdots \right>$ with $n_i$ telling us how many particles are in the state with energy $E_i$. – Marek Jan 8 '11 at 4:07
@Marek: thanks! I would definitely appreciate some pointers about exactly what to do with the original Hamiltonian. Some follow-up questions: are the creation and annihilation operators observables? Is number going to turn out to be a conserved quantity in the general case? – Qiaochu Yuan Jan 8 '11 at 14:05
@Marek: and one more question. Given an observable A on K, what's the corresponding observable on S(K)? I can think of a few different possibilities and I'm not sure which one physicists actually use. – Qiaochu Yuan Jan 8 '11 at 14:27
@Qiaochu: true, but I thought you were asking how to promote observables from $K$ to $S(K)$. $N(\lambda)$ are completely new operators that need the structure of $S(K)$ to be defined. As for interactions: well, that is a topic for a one-semester course in quantum field theory so I recommend you ask this as a separate question. But in short: in general any $H_I$ is possible. But physical ones need to conserve energy, momentum and in fact complete Poincaré symmetry. So one uses representations of the Poincaré group to restrict possible choices of $H_I$. – Marek Jan 8 '11 at 15:24
@Qiaochu: (cont.) in the end it turns out that it's really ineffective to work in this way and one is forced to pass to the language of fields. One can quantize classical fields (again enforcing Poincaré and perhaps other, gauge symmetries) by usual means (canonical quantization, path-integral, etc.) and in the end one can decompose the Hilbert space into particles (in the Fock sense) and $H_I$ falls out. In any case, there is still a lot of room for possible interactions and to get a taste, see e.g. QED Lagrangian. – Marek Jan 8 '11 at 15:30
Reference: Fetter and Walecka, Quantum Theory of Many Particle Systems, Ch. 1
The Hamiltonian for a collection of simple harmonic oscillators, one per mode, is:
$$ H = \sum_{i = 0}^{\infty}\hbar \omega ( a_i^{+} a_i + \frac{1}{2} ) $$
where $\{a^+_i, a_i\}$ are the creation and annihilation operators for the $i^\textrm{th}$ eigenstate (momentum mode). The Fock space $\mathbf{F}$ consists of states of the form:
$$ \vert n_0, n_1, \ldots, n_N \rangle $$
which are obtained by repeatedly acting on the vacuum $\vert 0 \rangle$ with the ladder operators:
$$ \Psi = \vert n_0, n_1, \ldots, n_N \rangle = (a_0^+)^{n_0} (a_1^+)^{n_1} \ldots (a_N^+)^{n_N} \vert 0 \rangle $$
The interpretation of $\Psi$ is as the state which contains $n_k$ quanta of the $k^\textrm{th}$ eigenstate, created by the application of $(a^+_k)^{n_k}$ to the vacuum.
The above state is not normalized until multiplied by a factor of the form $\prod_{k=0}^N \frac{1}{\sqrt{n_k!}}$. If your excitations are bosonic you are done, because the commutator of the ladder operators, $[a_i, a^+_j] = \delta_{ij}$, vanishes for $i\ne j$. However, if the statistics of your particles are non-bosonic (fermionic or anyonic), then the order in which you act on the vacuum with the ladder operators matters.
Of course, to construct a Fock space $\mathbf{F}$ you do not need to specify a Hamiltonian. Only the ladder operators with their commutation/anti-commutation relations are needed. In usual flat-space problems the ladder operators correspond to the usual Fourier modes, $a^+_k \Rightarrow e^{ikx}$. For curved spacetimes this procedure can be generalized by defining the ladder operators to correspond to suitable positive (negative) frequency solutions of a Laplacian on that space. For details, see Wald, QFT in Curved Spacetimes. Now, given any Hamiltonian of the form:
$$ H = \sum_{k=1}^{N} T(x_k) + \frac{1}{2} \sum_{k \ne l = 1}^N V(x_k,x_l) $$
with a kinetic term $T$ for a particle at $x_k$ and a pairwise potential term $V(x_k,x_l)$, one can write down the quantum Hamiltonian in terms of matrix elements of these operators:
$$ H = \sum_{ij} a^+_i \langle i \vert T \vert j \rangle a_j + \frac{1}{2}\sum_{ijkl} a^+_i a^+_j \langle ij \vert V \vert kl \rangle a_l a_k $$
where $|i\rangle$ is the state with a single excited quantum, corresponding to the action of $a^+_i$ on the vacuum. (For details and intermediate steps, see Fetter & Walecka, Ch. 1.)
I hope this helps resolves some of your doubts. Being as you are from math, there are bound to be semantic differences between my language and yours so if you have any questions at all please don't hesitate to ask.
Can you explain the notation in that last formula? What are the b_i? – Qiaochu Yuan Jan 8 '11 at 18:00
@qiaochu that was a typo. It's fixed now. – user346 Jan 8 '11 at 20:29
As recently as 10 years ago Walecka was still teaching at William & Mary. It's worth taking his course. Any course. Or even going to see a talk. Really. – dmckee Jan 8 '11 at 23:00
Suppose, as you do, that $K$ is the space of states of a single boson. Then the space of states of a combined system of two bosons is not $K\otimes K$, as it would be if the two bosons were distinguishable; it is the symmetric subspace, which you are denoting $S^2$. Your sum over all $i$, which you denote $S$, is then a Hilbert space (state space) of a new system whose states contain the states of a one-boson system, a two-boson system, a three-boson system, etc., but not an infinite number of bosons (that is not included in the space $S$). And your space $S$ includes superpositions: for example, if $v_1 \in S^1$ (a state of one boson) and $v_3 \in S^3$ (a state of a three-boson system), then $0.707 v_1 - 0.707 v_3$ is a state which has a fifty percent probability of being one boson, if the number of particles is measured, and a fifty percent probability of being found to be three bosons. That is the physical meaning of Fock space. It is the state space on which the operators of a quantum field act.
As already remarked by Eric Zaslow, if $H$ is the Hamiltonian on the one-particle space $K$, then by definition $H\otimes I + I \otimes H$ is the Hamiltonian on $S^2$, and similarly on each $S^i$. Then one sums them all up to get a Hamiltonian on the direct sum $S$.
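A small numerical check of this construction (my illustration, using a hypothetical two-level one-particle space): restrict $H\otimes I + I \otimes H$ to the symmetric subspace of $K\otimes K$ and verify that the two-boson energies are pairwise sums of one-particle energies.

```python
# Sketch: Hamiltonian on the two-boson sector S^2(K) for a hypothetical
# two-level one-particle space K with placeholder energies e0, e1.
import numpy as np

e = np.array([1.0, 2.5])                    # placeholder one-particle energies
H1 = np.diag(e)
I = np.eye(2)
H2_full = np.kron(H1, I) + np.kron(I, H1)   # H (x) 1 + 1 (x) H on K (x) K

def sym(i, j):
    """Normalized symmetrization of |i>|j> inside K (x) K."""
    v = np.zeros(4)
    v[2*i + j] += 1.0
    v[2*j + i] += 1.0
    return v / np.linalg.norm(v)

P = np.column_stack([sym(0, 0), sym(0, 1), sym(1, 1)])
H2_sym = P.T @ H2_full @ P                  # restriction to S^2(K)

print(np.linalg.eigvalsh(H2_sym))           # [2.0, 3.5, 5.0] = e0+e0, e0+e1, e1+e1
```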
Unless this Hamiltonian is perturbed, the number of particles is constant, obviously, since it preserves each subspace $S^i$ of $S$. So there will be no creation or annihilation of pairs of particles. If this field comes into interaction with an extraneous particle, the Hamiltonian will be perturbed of course.
It is connected with second quantisation as follows: if you have a classical h.o. and quantise it, you get $K$. If you now second-quantise $K$, you get $S$, which can be regarded as a quantum field. Sir James Jeans showed, before the quantum revolution, that the classical electromagnetic field could be obtained from the classical-mechanics h.o. as a limit of more and more classical h.o.'s not interacting with each other, and this procedure of second quantisation is a quantum analogue. It is not the same procedure as if you start with a classical field and then quantise it. But it is remarkable that you can get the same answer either way, as Jeans noticed in the classical case. That is, you started with a quantum one-particle system and passed to Fock space and got the quantum field theory corresponding to that system. But we could have started with a classical field and quantised it, and gotten the quantum field that way.
|
03c2547e0e1346b5 | Psychology Wiki
Schrödinger's cat
Schrödinger's Cat: If the nucleus in the bottom left decays, the Geiger counter on its right will sense it and trigger the release of the gas. In one hour, there is a 50% chance that the nucleus will decay, and therefore that the gas will be released and kill the cat.
Schrödinger's cat is a seemingly paradoxical thought experiment devised by Erwin Schrödinger that attempts to illustrate the incompleteness of an early interpretation of quantum mechanics when going from subatomic to macroscopic systems. Schrödinger proposed his "cat" after debates with Albert Einstein over the Copenhagen interpretation, which Schrödinger defended, stating in essence that if a scenario existed where a cat could be so isolated from external interference (decoherence), the state of the cat could only be known as a superposition (combination) of possible states (eigenstates), because finding out (measuring the state) cannot be done without the observer interfering with the experiment: the measurement system (the observer) is entangled with the experiment.
The thought experiment serves to illustrate the strangeness of quantum mechanics and the mathematics necessary to describe quantum states. The idea of a particle existing in a superposition of possible states, while a fact of quantum mechanics, is a concept that does not scale to large systems (like cats), which are not indeterminably probabilistic in nature. Philosophically, these positions which emphasise either probability or determined outcomes are called (respectively) positivism and determinism.
The experiment
Schrödinger wrote:

"One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a 'blurred model' for representing reality. In itself it would not embody anything unclear or contradictory."
An illustration of both states, a dead and living cat. According to quantum theory, after an hour the cat is in a quantum superposition of coexisting alive and dead states. Yet when we look in the box we expect to only see one of the states, not a mixture of them.
The experiment must be shielded from the environment to prevent quantum decoherence from inducing wavefunction collapse.
The above text is a translation of two paragraphs from within a much larger original article, which appeared in the German magazine Naturwissenschaften ("Natural Sciences") in 1935: E. Schrödinger: "Die gegenwärtige Situation in der Quantenmechanik" ("The present situation in quantum mechanics"), Naturwissenschaften, 48, 807, 49, 823, 50, 844 (November 1935). It was intended as a discussion of the EPR article published by Einstein, Podolsky and Rosen in the same year. Apart from introducing the cat, Schrödinger also coined the term "entanglement" (German: Verschränkung) in his article.
In posing this, Schrödinger asked the question: when does a quantum system stop existing as a mixture of states and become one or the other? (More technically, when does the actual quantum state stop being a linear combination of states, each of which resembles a different classical state, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The purpose of the thought experiment is to illustrate this apparent paradox: our intuition says that no observer can be in a mixture of states, yet it seems cats can be such a mixture. Are cats required to be observers, or does their existence in a single well-defined classical state require another external observer? Each alternative seemed absurd to Albert Einstein, who was impressed by the ability of the thought experiment to highlight these issues; in a letter to Schrödinger dated 1950 he wrote:
You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality—if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gun powder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.
But perhaps it was inevitable that Einstein would be impressed with Schrödinger's cat—Einstein had previously suggested to Schrödinger a similar paradox involving an unstable keg of gunpowder, instead of a cat. Schrödinger had taken the next step of applying quantum mechanics to an entity that may or may not be conscious, to further illustrate the putative incompleteness of quantum mechanics.
Copenhagen interpretation
In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This experiment makes apparent the fact that the nature of measurement, or observation, is not well defined in this interpretation. Some interpret the experiment to mean that while the box is closed, the system simultaneously exists in a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat", and that only when the box is opened and an observation performed does the wave function collapse into one of the two states. More intuitively, some feel that the "observation" is taken when a particle from the nucleus hits the detector. Recent developments in quantum physics show that measurements of quantum phenomena taken by non-conscious "observers" (such as a wiretap) most definitely alter the quantum state of the phenomena from the point of view of conscious observers reading the wiretap, lending support to this idea.
A precise rule is that probability enters at the point where the classical approximation is first used to describe the system - almost by tautology, as the classical approximation is just a simplification of the quantum mathematics, and so must introduce imprecision in the measurement, which can be viewed as probability. Note, however, that this only applies to descriptions of the system, not the system itself. The cat is both 100% alive and 100% dead at the same time due to quantum theory.
Under Copenhagen, the amount of uncertainty for a complex quantum system is predicted by quantum decoherence. Particles which exchange photons (and possibly other atomic or subatomic particles) become entangled with each other from the point of view of an observer, meaning that these particles can only be described accurately with reference to each other, which decreases the total uncertainty of those particles from the point of view of our observer. By the time one has reached "macroscopic" levels - such as a cat, which is made up of a number of atomic particles almost too large to express with words - so many particles have become entangled with each other so as to decrease the uncertainty to almost zero. (Quantum effects in huge collections of particles are only seen in very rare, and often man-made, situations, such as a Bose-Einstein condensate). Thus, at least from the point of view of the observer, any improbability regarding the cat as a system of quantum particles has disappeared due to the massive amount of entanglement between all of the particles that make it up, meaning that the cat does not truly exist as both alive and dead at the same time, at least from the point of view of any observer viewing the cat.
Even before observation was noted to be fundamentally distinct from consciousness through experimentation, the experiment always contained at least two "observers" - the physicist and the cat. Even had the physicist been unaware of the cat's state in the hypothetical experiment, one would have had to posit that the cat, at least, would have been quite sure of its status (at least, as long as the gas had not yet ended its ability to "observe"). However, since "observation" has been shown by experiment to have nothing to do with consciousness - or at the very least, any traditional definition of consciousness - most conjecture along these lines probably falls under the "interesting but physically irrelevant" category.
Steven Weinberg in "Einstein's Mistakes", Physics Today, November 2005, page 31, said:
All this familiar story is true, but it leaves out an irony. Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wavefunction (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?
Everett many-worlds interpretation & consistent histories
In the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process, both states persist, but decoherent from each other. When an observer opens the box, he becomes entangled with the cat, so observer-states corresponding to the cat being alive and dead are formed, and each can have no interaction with the other. The same mechanism of quantum decoherence is also important for the interpretation in terms of Consistent Histories. Only the "dead cat" or "alive cat" can be a part of a consistent history in this interpretation.
In other words, when the box is opened, the universe (or at least the part of the universe containing the observer and cat) is split into two separate universes, one containing an observer looking at a box with a dead cat, one containing an observer looking at a box with a live cat.
Ensemble interpretation
In the Ensemble Interpretation, the Schrödinger's cat paradox is a trivial non-issue. In this interpretation, the state vector does not apply to individual cat experiments; it only applies to the statistics of many similarly prepared cat experiments.
Indeed, the cat paradox was specifically constructed by Schrödinger to illustrate that the Copenhagen Interpretation suffered from fundamental problems. It was not intended as an example that quantum mechanics actually predicts that a cat could be alive and dead simultaneously, though some have made this further assumption.
Practical applications
The experiment is a purely theoretical one, and the machine proposed is not known to have been constructed.
This has some practical use in quantum computing and quantum cryptography. It is possible to send light that is in a superposition of states down a fiber optic cable. Placing a wiretap in the middle of the cable which intercepts and retransmits the transmission will collapse the wavefunction (in the Copenhagen interpretation, "perform an observation") and cause the light to fall into one state or another. By performing statistical tests on the light received at the other end of the cable, one can tell whether it remains in the superposition of states or has already been observed and retransmitted. In principle, this allows the development of communication systems that cannot be tapped without the tap being noticed at the other end. This experiment can be argued to illustrate that "observation" in the Copenhagen interpretation has nothing to do with consciousness, in that a perfectly unconscious wiretap will cause the statistics at the end of the wire to be different. Yet, one still cannot factor out the observation of the wiretap as having an effect upon the outcome.
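A toy Monte Carlo (our sketch, not from the original article) shows how the receiver's statistics betray a naive intercept-and-resend tap: photons are sent in the superposition state (|0⟩ + |1⟩)/√2 and measured in the +/− basis. An untapped line never yields "−"; an eavesdropper who measures in the 0/1 basis and retransmits turns half of the results into "−".

```python
# Toy model: detecting an intercept-and-resend wiretap on photons sent as
# |+> = (|0> + |1>)/sqrt(2) and measured in the +/- basis. Illustrative only.
import random

def minus_fraction(n_photons, wiretap):
    minus = 0
    for _ in range(n_photons):
        if wiretap:
            # Eavesdropper measures in the 0/1 basis: the superposition
            # collapses to |0> or |1>, and the receiver's +/- measurement
            # on either state is then a fair coin flip.
            if random.random() < 0.5:
                minus += 1
        # Without a tap, |+> always gives "+": no minus outcomes.
    return minus / n_photons

random.seed(0)
print("'-' fraction, no tap :", minus_fraction(100000, wiretap=False))  # 0.0
print("'-' fraction, tapped :", minus_fraction(100000, wiretap=True))   # ~0.5
```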
In quantum computing, the phrase "cat state" often refers to the special entanglement of qubits where the qubits are in an equal superposition of all being 0 and all being 1, i.e. $|00\ldots0\rangle + |11\ldots1\rangle$.
A variant of the Schrödinger's Cat experiment known as the quantum suicide machine has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's Cat experiment from the point of view of the cat, and argues that this may be able to distinguish between the Copenhagen interpretation and many worlds. Another variant on the experiment is Wigner's friend.
Physicist Stephen Hawking once exclaimed, "When I hear of Schrödinger's cat, I reach for my gun," paraphrasing German playwright and Nazi "Poet Laureate", Hanns Johst's famous phrase "Wenn ich 'Kultur' höre, entsichere ich meine Browning!" ("When I hear the word 'culture', I release the safety on my Browning!")
In fact, Hawking and many other physicists are of the opinion that the "Copenhagen School" interpretation of quantum mechanics unduly stresses the role of the observer. Still, a final consensus on this point among physicists seems to be out of reach.
|
1f9f3b053e2a3625 |
Nanoscale Res Lett. 2012; 7(1): 531.
Published online Sep 26, 2012. doi: 10.1186/1556-276X-7-531
PMCID: PMC3477088
Ramón Manjarres-García,1 Gene Elizabeth Escorcia-Salas,1 Javier Manjarres-Torres,1 Ilia D. Mikhailov,2 and José Sierra-Ortega1 (corresponding author)
1Group of Investigation in Condensed Matter Theory, Universidad del Magdalena, Santa Marta, Colombia
2Universidad Industrial de Santander, A. A. 678, Bucaramanga, Colombia
Ramón Manjarres-García: ramonmanjarres71/at/; Gene Elizabeth Escorcia-Salas: elizabethescorcia/at/; Javier Manjarres-Torres: javiermanjarres27/at/; Ilia D Mikhailov: mikhail2811/at/; José Sierra-Ortega: jsierraortega/at/
Received July 10, 2012; Accepted August 29, 2012.
Keywords: Quantum dots, Adiabatic approximation, Artificial molecule. PACS: 78.67.-n, 78.67.Hc, 73.21.-b
Figure 1. Scheme of the artificial hydrogen-like molecule.
[Equation 1: confinement profile of the structure; equation image not preserved]
Besides, for the sake of simplicity, we consider a model with infinite barrier confinement, defined in cylindrical coordinates as V(ρ, z) = 0 if the electron lies inside the dot region, and V(ρ, z) = ∞ otherwise.
Given that the thicknesses of the layers are much smaller than their lateral dimensions, one can take advantage of the adiabatic approximation in order to exclude from consideration the rapid particle motions along the z-axis [6,7] and obtain the following expression for the effective Hamiltonian in polar coordinates:
[Equation 2: effective Hamiltonian in polar coordinates; equation image not preserved]
The effective Bohr radius a0* = ħ²ϵ/(m*e²) as the unit of length, the effective Rydberg Ry* = e²/(2ϵa0*) as the energy unit, and a corresponding quantity B0* as the unit of the magnetic field strength have been used in Hamiltonian (Equation 2), with m* being the electron effective mass and ϵ the dielectric constant. The polar coordinates (ρi, φi), labeled by i = 1, 2, correspond to the first and the second electrons, respectively. It is seen that, for the selected particular profile given by Equation 1, the Hamiltonian (Equation 2) coincides with the one which describes two particles in a 2D quantum dot with parabolic confinement and a renormalized interaction. It is well known that such a Hamiltonian may be separated by using the center-of-mass, R = (ρ1 + ρ2)/2, and the relative, ρ = ρ2 − ρ1, coordinates [8]:
[Equation 3: Hamiltonian separated into center-of-mass and relative-motion parts; equation image not preserved]
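The original equation image is not recoverable, but for orientation here is a sketch of the standard separation for two identical particles in a two-dimensional parabolic trap, in dimensionless units; the confinement frequency ω and the interaction term Vint(ρ) are placeholder symbols, and the paper's actual Equation 3 additionally carries the magnetic-field terms.

```latex
% Illustrative sketch only -- not the paper's Equation 3.
% Assumes H = -\nabla_1^2 - \nabla_2^2 + \omega^2(\rho_1^2+\rho_2^2) + V_{int}.
\[
\begin{aligned}
\vec{R} &= \tfrac{1}{2}\left(\vec{\rho}_1 + \vec{\rho}_2\right), \qquad
\vec{\rho} = \vec{\rho}_2 - \vec{\rho}_1,\\
-\nabla_1^2 - \nabla_2^2 &= -\tfrac{1}{2}\nabla_R^2 - 2\nabla_\rho^2, \qquad
\rho_1^2 + \rho_2^2 = 2R^2 + \tfrac{1}{2}\rho^2,\\
H &= \underbrace{\left(-\tfrac{1}{2}\nabla_R^2 + 2\omega^2 R^2\right)}_{\text{center of mass}}
   + \underbrace{\left(-2\nabla_\rho^2 + \tfrac{1}{2}\omega^2\rho^2
   + V_{\text{int}}(\rho)\right)}_{\text{relative motion}}
\end{aligned}
\]
```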
The wave function is factorized into two parts, Ψ = Φ(R)ψ(ρ), describing the center-of-mass and the relative motions, respectively. Meanwhile, the total energy splits into two terms depending on two radial and two azimuthal quantum numbers:
[Equation 4: total energy as a sum of center-of-mass and relative-motion terms; equation image not preserved]
where the first term represents the well-known expression for the exact energy levels of a two-dimensional harmonic oscillator, labeled by the radial and azimuthal quantum numbers of the center-of-mass motion, while the relative-motion energy must be found by solving the following one-dimensional Schrödinger equation:
[Equation 5: one-dimensional Schrödinger equation for the relative motion; equation image not preserved]
Before the results are shown and discussed, it is useful to specify the labeling of the quantum levels of the two-electron molecular complex. According to Equation 4, the energy levels can be labeled by four quantum numbers: radial and azimuthal numbers for the center-of-mass motion, and radial and azimuthal numbers for the relative motion, the latter azimuthal number denoted lp. Even and odd lp correspond to the spin singlet and triplet states, respectively, consistent with the Pauli exclusion principle.
Figure 2
In order to verify this hypothesis, we present in Figure 3 the calculated energies of some lower levels of the molecular complex as functions of the magnetic field strength for QDs with small (R = 40 nm, upper curves) and large (R = 100 nm, lower curves) radii.
Figure 3
Figure 4
Figure 5
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
• Kramer B. Proceedings of a NATO Advanced Study Institute on Quantum Coherence in Mesoscopic System: 1990 April 2-13; Les Arcs, France. New York: Plenum; 1991.
• Maksym PA, Chakraborty T. Quantum dots in a magnetic field: role of electron–electron interactions. Phys Rev Lett. 1990;65:108–111. doi: 10.1103/PhysRevLett.65.108. [PubMed] [Cross Ref]
• Pfannkuche D, Gudmundsson V, Maksym P. Comparison of a Hartree, a Hartree-Fock, and an exact treatment of quantum-dot helium. Phys Rev B. 1993;47:2244–2250. doi: 10.1103/PhysRevB.47.2244. [PubMed] [Cross Ref]
• Zhu JL, Yu JZ, Li ZQ, Kawazoe Y. Exact solutions of two electrons in a quantum dot. J Phys Condens Matter. 1996;8:7857. doi: 10.1088/0953-8984/8/42/005. [Cross Ref]
• Mikhailov ID, Betancur FJ. Energy spectra of two particles in a parabolic quantum dot: numerical sweep method. Phys stat sol (b) 1999;213:325–332. doi: 10.1002/(SICI)1521-3951(199906)213:2<325::AID-PSSB325>3.0.CO;2-W. [Cross Ref]
• Peeters FM, Schweigert VA. Two-electron quantum disks. Phys Rev B. 1996;53:1468–1474. doi: 10.1103/PhysRevB.53.1468. [PubMed] [Cross Ref]
• Mikhailov ID, Marín JH, García F. Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys stat sol (b) 2005;242(8):1636–1649. doi: 10.1002/pssb.200540053. [Cross Ref]
• Betancur FJ, Mikhailov ID, Oliveira LE. Shallow donor states in GaAs-(Ga, Al)As quantum dots with different potential shapes. J Appl Phys D. 1998;31:3391. doi: 10.1088/0022-3727/31/23/013. [Cross Ref]
3c766cac78915b28 | Electron configuration
From Wikipedia, the free encyclopedia
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals.[1] For example, the electron configuration of the neon atom is 1s2 2s2 2p6.
Electronic configurations describe electrons as each moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, an energy is associated with each electron configuration and, under certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. The concept is also useful for describing the chemical bonds that hold atoms together. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
See also: Electron shell
[Table of orbital images for n = 1 and n = 2: the s (ℓ = 0) orbital with m = 0, and the p (ℓ = 1) orbitals pz (m = 0) and px, py (m = ±1)]
Electron configuration was first conceived of under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n² electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually noted by an up-arrow) and one with a spin −1/2 (with a down-arrow).
A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell.
The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics,[2] in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.[3]
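Both counting rules (2n² electrons per shell, 2(2ℓ + 1) per subshell) are easy to verify mechanically. A minimal Python sketch, with illustrative function names:

```python
def subshell_capacity(l: int) -> int:
    """Electrons that fit in a subshell with azimuthal quantum number l:
    2l + 1 orbitals, two spin states each."""
    return 2 * (2 * l + 1)

def shell_capacity(n: int) -> int:
    """Electrons that fit in shell n: sum over the subshells l = 0..n-1."""
    return sum(subshell_capacity(l) for l in range(n))

for n in (1, 2, 3, 4):
    print(n, shell_capacity(n))  # prints 2, 8, 18, 32 -- i.e. 2*n**2
```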
See also: Atomic orbital
Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic orbital labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each orbital (or set of orbitals sharing the same label) placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). The configuration of phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3.
For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used, since all but the last few subshells are identical to those of one or another of the noble gases. Phosphorus, for instance, differs from neon (1s2 2s2 2p6) only by the presence of a third shell. Thus, the electron configuration of neon is pulled out, and phosphorus is written as follows: [Ne] 3s2 3p3. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element.
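This abbreviation is pure string manipulation; a hedged Python sketch (the helper name and the explicit core arguments are illustrative, not a standard API):

```python
def noble_gas_abbreviation(config: str, core: str, core_symbol: str) -> str:
    """Replace a leading noble-gas core in a configuration string
    by its bracketed symbol."""
    assert config.startswith(core), "core must be a prefix of the configuration"
    return f"[{core_symbol}]" + config[len(core):]

print(noble_gas_abbreviation("1s2 2s2 2p6 3s2 3p3", "1s2 2s2 2p6", "Ne"))
# [Ne] 3s2 3p3  -- phosphorus, as in the text
```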
For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies that is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti.
The superscript 1 for a singly occupied orbital is not compulsory. It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, l, of 0, 1, 2 or 3 respectively. After "f", the sequence continues alphabetically "g", "h", "i"... (l = 4, 5, 6...), skipping "j", although orbitals of these types are rarely required.[4][5]
The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).
Energy — ground state and excited states
The energy associated with an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state.
As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p orbital, to obtain the 1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm.
Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case for example to excite a 2p electron to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration.
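A quick order-of-magnitude check of the sodium example, using E = hc/λ (rounded constants; purely illustrative):

```python
# Photon energy for the 589 nm sodium line, in SI units.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt
wavelength = 589e-9
energy_eV = h * c / wavelength / eV
print(round(energy_eV, 2))  # ~2.1 eV: squarely a visible-light photon
```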
The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule.
Niels Bohr (1923) was the first to propose that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[6] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6).
The following year, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[7] However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).
Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect must be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925):[8]
It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k (l), j (ml) and m (ms).
The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[9] see below) for the order in which atomic orbitals are filled with electrons.
Atoms: Aufbau principle and Madelung rule
The Aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[10]
a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
The approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes orbitals outside the range of the diagram, starting with 8s.)
The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936,[9] and later given a theoretical justification by V.M. Klechkowski:[11]
1. Orbitals are filled in the order of increasing n+l;
2. Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, 9s).
In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (Uuo, Z = 118).
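Since Madelung's rule is just a sorting criterion (by n + l, then by n), the filling order above can be generated mechanically. A minimal Python sketch, with illustrative names; it deliberately ignores the exceptions (Cr, Cu, etc.) discussed later:

```python
# Generate the Madelung filling order: sort all (n, l) pairs by n + l,
# breaking ties by n. Letters: l = 0..7 -> s p d f g h i k ("j" skipped).
LETTERS = "spdfghik"

orbitals = [(n, l) for n in range(1, 9) for l in range(n)]
order = sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))
print(", ".join(f"{n}{LETTERS[l]}" for n, l in order))
# -> 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, ...

def aufbau_configuration(n_electrons: int) -> str:
    """Naive Aufbau filling; real atoms deviate for the exceptional elements."""
    parts, left = [], n_electrons
    for n, l in order:
        if left == 0:
            break
        k = min(left, 2 * (2 * l + 1))  # subshell capacity 2(2l+1)
        parts.append(f"{n}{LETTERS[l]}{k}")
        left -= k
    return " ".join(parts)

print(aufbau_configuration(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3
```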
The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry.
Periodic table
The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. In general, the periodicity of the periodic table in terms of periodic table blocks is clearly due to the number of electrons (2, 6, 10, 14...) needed to fill s, p, d, and f subshells.
The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.[12] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[13] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.
Shortcomings of the Aufbau principle
The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly[14] (although there are mathematical approximations available, such as the Hartree–Fock method).
That the Aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift.)
Ionization of the transition metals
The naïve application of the Aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n+l = 4 (n = 4, l = 0) while the 3d-orbital has n+l = 5 (n = 3, l = 2). After calcium, most neutral atoms in the first series of transition metals (Sc-Zn) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons".
The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals.[15] The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ...
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melvyn Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree–Fock method of atomic structure calculation.[16]
Similar ion-like 3dx4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6 with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom.
Other exceptions to Madelung's rule
There are several more exceptions to Madelung's rule among the heavier elements, and it is more and more difficult to resort to simple explanations, such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[17] which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[18] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[19] The table below shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page).
Electron shells filled in violation of Madelung's rule[20] (marked red in the original table)
Period 4 | Period 5 | Period 6 | Period 7
- | - | Lanthanum 57: [Xe] 6s2 5d1 | Actinium 89: [Rn] 7s2 6d1
- | - | Cerium 58: [Xe] 6s2 4f1 5d1 | Thorium 90: [Rn] 7s2 6d2
- | - | Praseodymium 59: [Xe] 6s2 4f3 | Protactinium 91: [Rn] 7s2 5f2 6d1
- | - | Neodymium 60: [Xe] 6s2 4f4 | Uranium 92: [Rn] 7s2 5f3 6d1
- | - | Promethium 61: [Xe] 6s2 4f5 | Neptunium 93: [Rn] 7s2 5f4 6d1
- | - | Samarium 62: [Xe] 6s2 4f6 | Plutonium 94: [Rn] 7s2 5f6
- | - | Europium 63: [Xe] 6s2 4f7 | Americium 95: [Rn] 7s2 5f7
- | - | Gadolinium 64: [Xe] 6s2 4f7 5d1 | Curium 96: [Rn] 7s2 5f7 6d1
- | - | Terbium 65: [Xe] 6s2 4f9 | Berkelium 97: [Rn] 7s2 5f9
Scandium 21: [Ar] 4s2 3d1 | Yttrium 39: [Kr] 5s2 4d1 | Lutetium 71: [Xe] 6s2 4f14 5d1 | Lawrencium 103: [Rn] 7s2 5f14 7p1
Titanium 22: [Ar] 4s2 3d2 | Zirconium 40: [Kr] 5s2 4d2 | Hafnium 72: [Xe] 6s2 4f14 5d2 | Rutherfordium 104: [Rn] 7s2 5f14 6d2
Vanadium 23: [Ar] 4s2 3d3 | Niobium 41: [Kr] 5s1 4d4 | Tantalum 73: [Xe] 6s2 4f14 5d3 | -
Chromium 24: [Ar] 4s1 3d5 | Molybdenum 42: [Kr] 5s1 4d5 | Tungsten 74: [Xe] 6s2 4f14 5d4 | -
Manganese 25: [Ar] 4s2 3d5 | Technetium 43: [Kr] 5s2 4d5 | Rhenium 75: [Xe] 6s2 4f14 5d5 | -
Iron 26: [Ar] 4s2 3d6 | Ruthenium 44: [Kr] 5s1 4d7 | Osmium 76: [Xe] 6s2 4f14 5d6 | -
Cobalt 27: [Ar] 4s2 3d7 | Rhodium 45: [Kr] 5s1 4d8 | Iridium 77: [Xe] 6s2 4f14 5d7 | -
Nickel 28: [Ar] 4s2 3d8 or [Ar] 4s1 3d9 (disputed)[21] | Palladium 46: [Kr] 4d10 | Platinum 78: [Xe] 6s1 4f14 5d9 | -
Copper 29: [Ar] 4s1 3d10 | Silver 47: [Kr] 5s1 4d10 | Gold 79: [Xe] 6s1 4f14 5d10 | -
Zinc 30: [Ar] 4s2 3d10 | Cadmium 48: [Kr] 5s2 4d10 | Mercury 80: [Xe] 6s2 4f14 5d10 | -
The electron-shell configuration of elements beyond rutherfordium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120.[22]
Electron configuration in molecules
In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[23] rather than the atomic orbital labels used for atoms and monatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is written 1σg2 1σu2 2σg2 2σu2 3σg2 1πu4 1πg2,[24][25] or equivalently 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.
The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings.
Electron configuration in solids
In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.
The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified form of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.
This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the Aufbau principle. Not all methods in calculational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method that discards the model.
For atoms or molecules with more than one electron, the motion of the electrons is correlated and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations and therefore the notion of electronic configuration remains essential for multi-electron systems.
A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.
1. ^ a b IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "configuration (electronic)".
2. ^ a b In formal terms, the quantum numbers n, ℓ, and m arise from the fact that the solutions to the time-independent Schrödinger equation for hydrogen-like atoms are based on spherical harmonics.
4. ^ Weisstein, Eric W. (2007). "Electron Orbital". wolfram.
5. ^ Ebbing, Darrell D.; Gammon, Steven D. (2007-01-12). General Chemistry. p. 284. ISBN 978-0-618-73879-3.
6. ^ Bohr, Niels (1923). "Über die Anwendung der Quantentheorie auf den Atombau. I". Zeitschrift für Physik 13: 117. Bibcode:1923ZPhy...13..117B. doi:10.1007/BF01328209.
7. ^ Stoner, E.C. (1924). "The distribution of electrons among atomic levels". Philosophical Magazine (6th Ser.) 48 (286): 719–36. doi:10.1080/14786442408634535.
8. ^ Pauli, Wolfgang (1925). "Über den Einfluss der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt". Zeitschrift für Physik 31: 373. Bibcode:1925ZPhy...31..373P. doi:10.1007/BF02980592. English translation from Scerri, Eric R. (1991). "The Electron Configuration Model, Quantum Mechanics and Reduction" (PDF). Br. J. Phil. Sci. 42 (3): 309–25. doi:10.1093/bjps/42.3.309.
9. ^ a b Madelung, Erwin (1936). Mathematische Hilfsmittel des Physikers. Berlin: Springer.
11. ^ Wong, D. Pan (1979). "Theoretical justification of Madelung's rule". Journal of Chemical Education 56 (11): 714–18. Bibcode:1979JChEd..56..714W. doi:10.1021/ed056p714.
12. ^ The similarities in chemical properties and the numerical relationship between the atomic weights of calcium, strontium and barium was first noted by Johann Wolfgang Döbereiner in 1817.
13. ^ Scerri, Eric R. (1998). "How Good Is the Quantum Mechanical Explanation of the Periodic System?" (PDF). Journal of Chemical Education 75 (11): 1384–85. Bibcode:1998JChEd..75.1384S. doi:10.1021/ed075p1384. Ostrovsky, V.N. (2005). "On Recent Discussion Concerning Quantum Justification of the Periodic Table of the Elements". Foundations of Chemistry 7 (3): 235–39. doi:10.1007/s10698-005-2141-y.
14. ^ Electrons are identical particles, a fact that is sometimes referred to as "indistinguishability of electrons". A one-electron solution to a many-electron system would imply that the electrons could be distinguished from one another, and there is strong experimental evidence that they can't be. The exact solution of a many-electron system is a n-body problem with n ≥ 3 (the nucleus counts as one of the "bodies"): such problems have evaded analytical solution since at least the time of Euler.
15. ^ There are some cases in the second and third series where the electron remains in an s-orbital.
16. ^ Melrose, Melvyn P.; Scerri, Eric R. (1996). "Why the 4s Orbital is Occupied before the 3d". Journal of Chemical Education 73 (6): 498–503. Bibcode:1996JChEd..73..498M. doi:10.1021/ed073p498.
18. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "relativistic effects".
19. ^ Pyykkö, Pekka (1988). "Relativistic effects in structural chemistry". Chem. Rev. 88 (3): 563–94. doi:10.1021/cr00085a006.
20. ^ Miessler, G. L.; Tarr, D. A. (1999). Inorganic Chemistry (2nd ed.). Prentice-Hall. p. 38.
23. ^ The labels are written in lowercase to indicate that they correspond to one-electron functions. They are numbered consecutively for each symmetry type (irreducible representation in the character table of the point group for the molecule), starting from the orbital of lowest energy for that type.
24. ^ Levine I.N. Quantum Chemistry (4th ed., Prentice Hall 1991) p.376 ISBN 0-205-12770-3
25. ^ Miessler G.L. and Tarr D.A. Inorganic Chemistry (2nd ed., Prentice Hall 1999) p.118 ISBN 0-13-841891-8
• Jolly, William L. (1991). Modern Inorganic Chemistry (2nd ed.). New York: McGraw-Hill. pp. 1–23. ISBN 0-07-112651-1.
9f6eb71a0c945077 | A Scientific View of the God Delusion and its Implications
Written by Dr. V. K. Wadhawan. Published July 26, 2009, 7:03 pm.
(March-June 2008)
This article is an outcome of an email discussion I had with members of my extended family. I initiated this discussion soon after my first grandchild was born. The fact that I had become a grandfather had a strong effect on me. I had been instantly catapulted to the next senior generation. I started wondering about what is the best gift I can give to her and to other yet-to-be-born children in our extended family. I could think of nothing better than the creation of conditions in the family in which a child can grow to become an independent thinker, unencumbered by the views her/his parents or teachers may hold.
Credulity in a child is an evolutionary necessity. It suits the child as well as the parents. But every child has a right to be exposed to all streams of thought before making a choice. This article presents the scientific viewpoint. There is no dearth of opportunities for children to hear the opposite viewpoints!
The photograph at the end of this article was taken by Prof. Claude Boulesteix of the University of Aix-Marseille, France, in 1991. All other photographs were taken by me.
1.1 In science there is no place for any unquestionable authority. Only logical and verifiable propositions are relevant. Einstein was a brilliant scientist, and we humans can take pride in the fact that we belong to the same species as he. But his views on quantum mechanics were wrong, and he was shown his place on that issue. So we should never quote the scriptures or any ‘wise’ or ‘noble’ person when we want to argue about some FACT. Facts are established by evidence, not by opinion or preferences or desirability.
1.2 Intuition and inspired guesses, even traditional empirical information and folklore, are fine when it comes to building up a model for explaining a set of data, but the real test of that model will always have to be hard-core and repeatedly verifiable evidence.
1.3 We shall certainly discuss morality, the public good, and the desirability of a sense of service to others. But later. Let us get the hard facts first. As Mark Twain said: ‘Get your facts first. Then you can distort them as much as you please!’
1.4 The first thing to note is that, by adopting a strictly logical, honest, and objective approach to data, humanity has been able to achieve so much. To appreciate this properly, and to take pride in our scientific heritage, we should understand the basics of this approach. In particular, we must admire the indomitable human spirit which, in spite of the hostile conditions in which it had to progress, came up on top by adopting THE SCIENTIFIC METHOD of interpreting natural phenomena.
1.5 ‘Science is the process that takes us from confusion to understanding in a manner that’s precise, predictive and reliable – a transformation, for those lucky enough to experience it, that is empowering and emotional’ (Brian Greene).
1.6 There can be no place for reverence for authority in the scientific method. Just imagine, if we humans had taken Einstein’s word on quantum mechanics seriously (overawed by his giant intellect), the progress of science and technology would have been pushed back by several decades.
1.7 If you are not familiar with the basics of information theory, you may have a mental block about some of the statements below, but I shall try to explain. You all have an intuitive idea of what we mean by ‘information’. It can be measured in terms of strings of 0s and 1s (bits).
1.8 Entropy is a measure of disorder. It is thus just the opposite of information. Information means knowledge, and entropy or disorder is a measure of absence of knowledge. Thus ‘negative entropy’ and information have similar connotations.
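The link between information and entropy in items 1.7 and 1.8 can be made quantitative with Shannon's entropy, H = −Σ p·log2(p), the average information per symbol in bits. A minimal Python sketch (illustrative, not from the original article): a perfectly ordered message carries no surprise per symbol, while a maximally mixed one carries the most.

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Average information per symbol, in bits: H = -sum p * log2(p)."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy_bits("00000000"))  # 0.0: perfectly ordered, no surprise
print(shannon_entropy_bits("01010101"))  # 1.0: two symbols, equally likely
```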
1.9 In science the term ‘complexity’ has a technical meaning. In particular, it is not the same thing as complicatedness. The ‘degree of complexity’ of a system can be viewed as the amount of information needed to describe the structure and function of that system. A living organism is far more complex than, say, a crystal of common salt. The amount of information needed to describe the structure of a crystal of common salt is not much compared to the degree of complexity of a living organism.
1.10 Energy drives all change. Energy is the engine of evolution. Our Earth (an ‘open’ system) receives most of its energy from the Sun, and the Sun produces it by thermonuclear reactions (conversion of mass into energy).
1.11 The influx of solar energy into our ecosphere drives it away from equilibrium. Any system away from equilibrium will naturally tend to move back to equilibrium and (concomitantly) towards a state of higher entropy (as dictated by the second law of thermodynamics). Thus a pushing of a system towards a state of disequilibrium (by solar energy in our case) can be thought of as an influx of 'negative entropy'. And remember, negative entropy means information.
1.12 Thus what the Sun has been doing all the time is to increase the information content of Mother Earth. This perpetual increase of information content is what drives evolution of various kinds. Evolution is not only biological; it can also be chemical, or even cultural.
1.13 The basic concept of biological evolution (higher chances of survival and propagation of the fittest; and adaptation and evolution of species (even emergence of new species) by the consequent processes of cumulative natural selection) was introduced by Charles Darwin over 150 years ago. His basic idea has stood the test of time (in spite of all the vicious attacks by vested interests). In fact, there is even a flourishing new subject called ‘artificial evolution’. In it, you program your computer in terms of notions very similar to Darwinian or Lamarckian evolution, and use it to solve a huge variety of highly complex scientific and technical problems. The evolution of problem-solving capabilities in intelligent robots is also achieved by this remarkably powerful approach. And the best is yet to come!
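As a toy illustration of the 'artificial evolution' just mentioned, here is a minimal cumulative-selection sketch in the spirit of Richard Dawkins's well-known 'weasel' program; the target string, population size and mutation rate are arbitrary illustrative choices, not anything from the article.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins's classic example
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Randomly change each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)  # cumulative selection
    generation += 1
print(generation, parent)  # converges in a few dozen generations
```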
1.14 Chemical evolution preceded biological evolution. Molecules of increasing complexity (or information content) evolved with the passage of time. In due course metabolism and self-replication properties appeared (either together or separately), and the emergence of ‘life’ was simply inevitable. Life just had to appear in the conditions prevailing on Earth, and, after it had appeared, biological evolution did the rest. There is nothing miraculous about that. Thus, the so-called ‘creation’ of life is a non-issue in science, whereas theologians make a huge issue out of it. Cool. Just chill!
1.15 And now about the God concept. The universe has a huge amount of information content, or complexity. How did the universe get created? Suppose you say that God created it. Now I appeal to your common sense and ask a question: If God created the universe, how did God get that information-content and complexity which must be at least equal to the information content of the universe? Anything simple or complex cannot have the capability to create something more complex than itself. So the God concept is no help whatsoever (it is redundant), so far as explaining the existence of the complex universe is concerned. Come with something else. Or simply say that we do not yet have certain answers.
1.16 But we still want a God up there, for emotional and ‘moral’ reasons, and for feeling secure in this utterly hostile set of natural conditions, right? Let us not mix objectivity with desirability. We can discuss these things separately, and we shall certainly do so below.
2.1 Since there is no sensible God concept that I can take seriously, I have to manage without it.
2.2 As of now, life is known to exist only on Earth. And in this life chain, we humans have evolved to be at the top. This means that in the present scheme of things in Nature, we occupy a highly privileged position. We can feel a great sense of pride in that, but with privileges come responsibilities. Mother Earth is our collective responsibility. There is no ‘God’ around who can be depended on to take care of our habitat by his benign intervention, in spite of our follies. ‘Whatever is done is done by man and judged by man’ (Maxim Gorky).
2.3 My life can survive only in a narrow range of temperatures and pressures. It is extremely vulnerable and fragile. This is bound to give me a sense of insecurity, and a yearning for a father-figure I can turn to for solace and reassurance. Unfortunately, that wish cannot be fulfilled, no matter how desperate I am about it. Therefore I have no choice but to be a brave, rational, and responsible citizen of the world I live in.
2.4 I take genuine pride in the fact that my ancestors developed the scientific method of interpreting information. I accept nothing without evidence. This gives me a great sense of liberation and power. Elitism? Yes. And why not? All the accumulated scientific knowledge that humanity possesses is verifiable knowledge, and my proud heritage. And yet I have no sense of attachment to it. If tomorrow new evidence is found, which demands a change in the way I look at Nature, I shall have no trouble abandoning even my pet theories. This is intellectual humility, and in sharp contrast to what happens in theology. You are not permitted to question certain statements there. How stultifying that must be for the intellect. Such an approach can kill the spirit of free enquiry, and deny the pleasure of discovery. I am glad that I do not suffer from that terrible handicap. Come join the elite club.
2.5 Selfishness and a sense of self-preservation is built into my evolutionary history, and therefore into my genes. But it is not individual selfishness necessarily. My brain has evolved to a state where I understand the benefits of collective self-interest.
2.6 I am a good and charitable person because it feels good to be so. If I am good to others, it is beneficial for my mental health. If I am good to others, I am being a responsible world citizen. I pity a person who is good only because of the fear of punishment/retribution by an imaginary 'God' for bad actions. My morality comes from within, because it is sensible to be moral and ethical. Being a moral person feels good. Why should I be moral and upright only because I am a 'God-fearing' person? And what is God anyway?
2.7 Since Mother Earth is my responsibility, I should do nothing that harms the ecosphere unnecessarily. That is a matter of simple self-interest (collective self-interest). Just look at the pollution caused by Hindus with all the burning they do in their havans and pujaas. Mindless burning of precious resources is a crime, and it is happening because of an irrational belief system. Look at their contribution to global warming when they burn their dead. The three 'Abrahamic' monotheistic religions, namely Christianity, Islam, and Judaism, are more environment-friendly on that score, but they are worse in many other matters. The depredations of these three religions have been discussed in detail by Richard Dawkins (RD) in his book 'The God Delusion' (TGD) (2007).
2.8 I feel sad about the immense damage done by practically all organised religions to Mother Earth and to humanity: wars, terrorism, meaningless rituals and wastage, inter-religious hatred and animosity, atrocities on women and children; the list is very long indeed. ‘Those who can make you believe absurdities can make you commit atrocities’ (Voltaire). ‘With or without religion, good people will do good, and evil people will do evil, but for good people to do evil, that takes religion’ (Steven Weinberg). It is our duty to raise our voice against all irrational acts and thinking.
2.9 Buddhism preaches non-violence and emphasizes the importance of service to others. It is also quite Godless; that is why it was hounded out of India by the Vedic Hindus of that time.
2.10 Many people create a God because they want one. Their upbringing has been such that they would have withdrawal symptoms if their God were taken away or demolished by logical and responsible reasoning. In fact, they exhibit arrogant or even violent behaviour when this happens. Does that ring a bell? The symptoms are the same as those of drug addicts. ‘Religion is the opium of the masses.’ The C.M. of West Bengal cannot give up smoking because he cannot cope with the withdrawal symptoms. But can that justify his addiction? No addiction can be justified. I feel good about the fact that I do not suffer from God-addiction.
2.11 Free from the God-created-everything syndrome, I can indulge in a great sense of wonder at the way complexity has evolved in Nature, starting from simple inanimate matter. The pictures I have inserted in this write-up are some examples of that, and there is nothing ‘Godly’ about their beauty. There is a great sense of accomplishment when I or any of my fellow humans discovers one more ‘secret’ of Nature. And I keep thanking the scientific method for this, which is a great accomplishment of the human intellect. I should do nothing to insult the scientific spirit and the scientific method. And I am grateful for the ever-mounting fallouts of this method of discovering the secrets of Nature. I am proud of the scientific and technological heritage of humankind, a triumph of the human mind, particularly the collective human psyche (leaving out the irrational believers, of course).
3.1 Interactions or forces operative between any two or more objects have to be from one or more of the following:
• The electromagnetic interaction.
• The gravitational interaction.
• The strong nuclear interaction.
• The weak nuclear interaction.
No other interactions or forces are known to us at present.
3.2 No object can move with a speed greater than that of light (Einstein again).
3.3 The past is dead, and the future cannot be predicted. Therefore, all astrology is utter nonsense, as are numerology and the like.
3.4 No macroscopic object can be at two different places at the same time.
3.5 If you take seriously some of the claims made by yogis, babas etc. (regarding clairvoyance, premonitions, predictions, dreams coming true, and all that), you have to postulate the existence of at least one more interaction (in addition to the four mentioned above), with mutually contradictory properties, and in clear violation of the known laws of science. Science does not have all the answers, but we are trying to get more and more answers. If anybody can establish the existence of this completely crazy-looking interaction I just mentioned, he/she will surely be honoured with a Nobel Prize, and may become more famous than Einstein. Science, of course, always welcomes new knowledge and insights.
3.6 Brain science is a very challenging science, and there is a lot we do not understand at present. But we are trying. There are various views on the meanings of dreams, if at all there are meanings. The feel-good factor, as also the feel-bad factor, plays huge tricks on the brain. We tend to remember what we like or cherish, and tend to forget or ignore what we do not like or do not find interesting. Our upbringing and mental conditioning since childhood has a major role to play in this.
3.7 We all want to feel important. What can feel better than being close to ‘God, the almighty’, even an imaginary God?! But it is nothing more than a self-imposed delusion, the God delusion. Just make-believe.
3.8 Some of the great names among the classical psychologists are: Freud, Jung, and Adler. Adler built on the idea that much of our frustration and mental disorders come when we cannot have control over situations or domination over others. People go to extraordinary lengths to achieve this control. In the case of ascetics, this aggression is turned inwards, and they try to control their bodies and thoughts. It makes them feel good, and in control. A stage comes in their penance and meditation when their brain starts imagining things; they interpret it as 'divine revelation', 'flashes of insight', and what not.
3.9 Being of service to others certainly rebounds on you in various ways, and you are always a gainer in the long run. The ‘spiritual’ leaders, knowingly or unknowingly, do things which often amount to charity and social service, but there is an additional bonus for their ego: They exercise huge control over the minds of large numbers of people. Adler again.
3.10 Ascetics and ‘spiritual’ leaders are called ‘holy’ men or women, whatever that term means. A nonscientific ascetic does little more than torture himself, apart from influencing others with his irrational and therefore false beliefs. A scientist, on the other hand, improves the quality of our physical, mental, and cultural life by his discoveries and inventions, by strictly following the tenets of the scientific method. Who is the ‘holier’ of the two: the ascetic or the scientist? Who is more deserving of our gratitude and reverence?
4.1 Blame it on the upbringing of children. Parents impose their beliefs on their little children. This is not fair. Every child has a right to be exposed to all streams of thought. In particular, it is our duty to ensure that we do not shield our children from the scientific approach to things. We want our children to grow into fearless truth-seeking individuals, no matter how harsh the truth may be. The whole truth, and nothing but the truth. We do not want that any of them should move around in life like a zombie, repeating certain statements parrot-like, without pausing to think about their veracity or logic.
4.2 Some of the scientific arguments and theories are not for the intellectually meek. By contrast, it does not require any intelligence to have blind faith in something. But even a moderately intelligent child can develop a scientific outlook on life if brought up in an atmosphere in which all types of questions are encouraged, and no idea is treated as unchallengeable or taboo.
4.3 It is necessary to have a basic understanding of statistical theory for a correct interpretation of many of the coincidences, 'premonitions', 'miracles', etc. Unfortunately, even among the trained scientists there are many who lack this understanding. 'Statistical significance' and 'level of confidence' are technical terms. How many educated persons actually bother to think in terms of these parameters when they come across 'miracles', 'strange' coincidences, dream-realisations, etc.? Not many. This happens because they have been brainwashed.
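To make this concrete, here is a toy calculation (all numbers are invented for illustration): even a 'one in a million' coincidence becomes a near-certainty when enough independent opportunities for it exist.

```python
# Probability that a "one in a million" event happens at least once,
# given a billion independent chances (illustrative numbers only).
p_single = 1e-6
n_trials = 1e9
p_at_least_once = 1 - (1 - p_single) ** n_trials
print(p_at_least_once)  # ~1.0: the "miracle" is practically certain somewhere
```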
4.4 It is worth repeating and emphasizing that a high degree of intellectual prowess is not a necessity for a child to develop a rational view of things, provided he/she grows up in an environment of rationality and free enquiry. This is a birthright of your children. Do not deny it to them. Be a reasonable and responsible parent, who sets a good example for his/her children by having an open mind on every issue, including the ‘God’ issue. Parents do want to give good sanskars to their children. They usually do this by their own example. Give your children the sanskar that they should not be afraid of facing the truth. In fact, they should have a proactive approach, whereby they go seeking the objective truth, and not just sermons of ‘wise’ people or pronouncements in ‘sacred’ texts. ‘Mere scholarship will not help you to attain the goal. Meditate. Realize. Be free‘ (Swami Sivananda; emphasis added).
4.5 To the young generation I want to say this: It is nice to see how 'cool' you can be regarding all the 'in' things and the latest trends. Show me how cool you are capable of being when it comes to knowing the basics of what science is all about, and why the scientific method has been so remarkably successful in engendering so many achievements of the human intellect. Should you not be curious about that? How about showing off your knowledge in that area also?
4.6 The scientific method is not the exclusive possession of scientists. The scientific method of interpreting information is the crowning glory of the collective human intellect, and is available to all of us for applying in our day-to-day lives. Don’t miss out on it. A whole new world of good science is waiting for you to explore and wonder about. There is poetry in good science. And deep philosophy too. Rational philosophy. Scientists seek truth, and have the ever-present humility to admit their mistakes in science. What can be nobler than that? How about joining their ranks, at least as informed members of the public? That would be really cool! No?
Q: What is faith?
A: As I was sitting in my chair,
I knew the bottom wasn’t there.
Nor legs nor back, but I just sat,
Ignoring little things like that.
(William Hughes Mearns)
Q: What is prayer?
Q: Einstein said, 'God does not play dice with the universe'. Doesn't that show that even Einstein believed in God?
A: Wrong. This issue has been discussed in great detail in the very first chapter of the book 'The God Delusion' by Dawkins (2007). Einstein made this remark in the context of his opposition to quantum mechanics as formulated at that time. Recently a letter written by Einstein in January 1954 (just one year before his death) was auctioned for $400,000. Here is an excerpt from that letter: '. . the word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honourable but still primitive legends which are nevertheless pretty childish'.
Dr. V. K. Wadhawan is the Raja Ramanna Fellow at BARC (DAE), Mumbai and the Associate Editor of PHASE TRANSITIONS. He is also the Ex-Head, Laser Materials Division at the Centre for Advanced Technology (DAE), Indore.
• Hello sir!
This was an absolutely impressive article. It reads like a summary of the cumulative intelligence of all mankind, passed on through the generations, minus only the particulars. So, congratulations and thanks!
I’ve a vague qualitative idea of second law of thermodynamics, and felt that when sunlight hits the Earth, a heat energy gets dissipated in the atmosphere (conduction, convection and radiation), which should rather increase the entropy (disorder). Kindly explain how my interpretation is flawed in this regard.
I’m an atheist, and have drawn almost identical conclusions about life, humans and the Universe in general, as have been very eloquently laid down by you in this particular article, so incidentally, I’ve dealt with the same individual issues in many of my articles. Would be glad and honored if you find time and I could have your views on the said issues:
1. A link to a very good article that had spawned an elaborate debate on the interference of religion with human development: here (click)
2. A blog post dealing with how possibly prominent religious leaders are manipulating the public opinion, and are possibly themselves atheists(!): here (click)
3. A fantasy-based post on possible incentives to live in a simulated world. The latter comments also dealt with natural factors favoring survival of life on the Earth, including thermodynamic considerations: here (click)
4. Two posts related to morality–first, dealing with my personal bases of morality, and second, with how morality based solely on authority (religious, communist, dictatorial, etc.) could be flawed: here (click) and here (click)
5. A post dealing with implications of uncertainty principle on human free will: here (click)
Congratulations, again!
• Dr. V.K. Wadhawan
In thermodynamics, any system must belong to one of three possible categories: isolated, closed, or open.
An isolated system is one which cannot exchange energy (e.g. heat) or matter with the surroundings. For such a system, the second law of thermodynamics says that its entropy can never decrease.
A closed system can exchange energy, but not matter, with the surroundings. For dealing with the entropy question for such a system, the relevant quantity is the free energy F (= E − TS). Here E is the internal energy of the system, T is its temperature, and S the entropy. The version of the second law applicable to such systems now deals with F, rather than S. The law says that F can never increase for a closed system. But you see from the definition of F that the entropy S can indeed decrease in this case, provided the decrease in E is of a larger magnitude than the decrease in TS. This 'loophole'(!) in the second law is something we humans exploit in a variety of technological applications.

Suppose I want to grow a crystal of common salt (NaCl). I shall first prepare its saturated solution. Note that the entropy (a measure of disorder) of NaCl in solution form is higher than in the crystal (a crystal has a highly ordered, or low-entropy, atomic structure). So why should the crystal grow at all? It happens because I create a closed system, rather than an isolated system, by putting the sealed beaker containing the aqueous solution of NaCl in a water bath whose temperature I can control accurately. I start at a temperature of, say, 50 °C; of course, my solution of NaCl must have a saturation level corresponding to 50 °C. Now I cool the water bath slowly. Naturally, at any temperature below 50 °C, the solution has more NaCl than the saturation level for that lower temperature. This excess amount of NaCl crystallizes out from the solution. More and more NaCl leaves the solution as I continue to cool the bath, resulting in a bigger and bigger crystal.

Now, a crystal not only has more order and less entropy, it also has a large amount of binding energy (we have denoted it by E). It so happens that, in this example, the E term dominates over the TS term, resulting in a lowering of the free energy F for the crystal. As the atoms were attaching themselves to the growing crystal, some heat was liberated (because of the binding of free atoms to the crystal), and this heat was dissipated (through conduction and convection) to the water bath surrounding the beaker. Also note that the beaker plus the bath can be taken as an isolated system, and for this the entropy has indeed increased. The second law is never violated in any situation whatsoever.
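A quick sign check of this free-energy argument, with made-up numbers (purely illustrative; none of these values come from the discussion above):

```python
# F = E - T*S: the crystal can have lower entropy than the solution and
# still win, provided E drops by more than T*S does.
T = 300.0    # kelvin (illustrative)
dE = -50.0   # binding-energy change on crystallisation (arbitrary units)
dS = -0.1    # entropy change: the crystal is more ordered, so dS < 0
dF = dE - T * dS   # = -50 + 30 = -20 -> F decreases, the crystal grows
print(dF)
```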
An open system (like our Earth) exchanges both heat and matter with the surroundings. In particular, it receives solar energy. Some of it is intercepted by green vegetation and gets trapped in the form of chemical energy in leaves. As you know, such reactions are mediated by chlorophyll. This trapped chemical energy is ‘food’, eaten by animals. This trapping of energy is another example of a LOCAL decrease of entropy, even though the total entropy of an appropriately defined isolated system never decreases.
Our Earth is a ‘complex system’. I am currently writing a book on complexity (my third book). There is a relationship between entropy and information. I have explained this in an article written for students. Please see the August 2009 issue of Resonance: Journal of Science Education.
• Ajita Kamal
Hi Ketan,
I’m glad you’re questioning the idea of ‘free-will’. I have studied and written about the subject for a while now. It is a topic that has been studied extensively by philosophers but has been ignored steadfastly by scientists. The semantics are dicey, and most philosophers are compatibilists. I suggest you read what Tom Clark at the Center for Naturalism has written on the subject: http://centerfornaturalism.blogspot.com/2009/07/freedom-from-free-will.html
Hint: Quantum indeterminism is a red herring when it comes to contra-causal free-will. It has nothing to do with it. I say ‘contra-causal free-will’ because that is the popular notion of free-will that is pervasive in religion and culture. Compatibilists re-define free-will to a point where it is no longer what most people mean by it (e.g., Daniel Dennett). Others, like Susan Blackmore, are steadfast defenders of the idea that free-will is neither real nor useful.
Here is Tom Clark’s latest interview on Point of Inquiry: http://www.pointofinquiry.org/tom_clark_scientific_naturalism_and_the_illusion_of_free_will/
If you’re interested in the naturalism movement and in the thoughts of others on the subject of free-will from a naturalistic perspective, you may be interested in joining the applied naturalism forum and the naturalism philosophy forum on yahoo groups. The discussion there is of an exceptionally good quality.
• Vinod K. Wadhawan
I am a physicist by training, and I have been devouring the literature on complexity for the last few years. I am convinced that the vexing question of ‘free will’ will get a proper explanation only when we humans acquire a better understanding of the evolution of complexity in the cosmos. Scientists have certainly been trying to tackle the free-will question via the complexity approach. See, for example, the last chapter of the fifth edition of the book THINKING IN COMPLEXITY: THE COMPUTATIONAL DYNAMICS OF MATTER, MIND, AND MANKIND by Klaus Mainzer (2007).
The occurrence of free will (mainly in humans) violates the causality principle. If reductionism and constructionism are valid ‘isms’, then why is it that I cannot predict what a human being will do during, say, the next one second? The nonvalidity (or only partial applicability) of reductionism and constructionism is the subject matter of the field of complexity.
It appears that quantum indeterminism is really not the big issue it is made out to be. At least that is the impression I got on reading the book THE END OF CERTAINTY: TIME, CHAOS, AND THE NEW LAWS OF NATURE by the Nobel laureate Ilya Prigogine (1996).
• Ajita Kamal
Dr Wadhawan,
I didn’t mean to imply that all scientists have completely ignored free-will, so I apologize if that’s the way it sounded. The fact is that, given the vast influence of this belief in human culture, and given that there is no evidence for its existence, contra-causal free-will has not been addressed in the scientific literature as much as it should have been. I am aware that a few scientists have addressed the subject.
Most of the scientific work that I know of consists of experiments in neuroscience, so I am grateful for your input on the physics approach. In fact, although all of science can essentially be reduced to physics and mathematics, I had never thought of addressing free-will as a complexity problem. I must read up on it. From a biological perspective, some fascinating new research is coming out, and it is indeed groundbreaking for many in the field. For many philosophers of mind, the science is finally catching up.
Here are two excellent recent experiments in the field of neuroscience:
1. http://www.wired.com/science/discoveries/news/2008/04/mind_decision
2. http://www.sciencemag.org/cgi/content/abstract/sci;324/5928/811
Regarding quantum indeterminism, I’m not sure what Prigogine’s analysis is. The prevailing idea among many naturalists I have spoken to about this is that determinism and indeterminism are not how free-will should be approached. They claim (and I must agree) that the way free-will is understood in society is as causal control of one’s choices. It is clear that whether or not the events that lead to the choices we make are deterministic, we have no actual control over them. In other words, free-will implies not that an omnipotent universe cannot control our mind, but that the sentient self can. Regardless of whether neuronal events are deterministic or not (and I think the evidence for quantum events influencing sentience is lacking), the kind of dualistic thinking that free-will requires nullifies the idea.
I have read people equating indeterminism with randomness and claiming that free-will exists at the quantum level because consciousness involves quantum brain events. Even disregarding the flaw inherent in their re-defining of free-will as a ‘will’ that is beyond the control of the universe, their error in categorizing indeterminism as randomness invalidates their argument. If I understand this right (please correct me if I’m wrong), the probabilities of a quantum event are determined by the probabilities of previous quantum events, and would be accessible to a computer that can compute a wave function that describes the entire universe.
In any case, there is no need to go into this, because even if quantum events were totally random, free-will (the contra-causal kind that is popular in culture and religion) would still be untouched. In fact, the ‘will’ would become even less free if it were being affected by random events!
• Vinod K. Wadhawan
This is heady stuff indeed! I must confess that my understanding of terms like ‘free will’ or ‘consciousness’ is next to nil. I have hardly done any reading on these things, but the interaction with this website has indeed motivated me to do some catching up. The level of discussion is high, and I wish to compliment the Editor, Ajita Kamal, for the commendable job he has been doing. I record here a few helpful facts, basing my response on the complexity way of looking at things.
Causality is not always the most important thing to worry about when one is dealing with a complex system like a human being or his/her brain and ‘mind’. Causality breaks down again and again as the degree of complexity of an open system increases successively. Such systems are far-from-thermodynamic-equilibrium open systems, always exchanging energy and matter with the surroundings. When a system is driven sufficiently away from equilibrium, it undergoes a ‘bifurcation’. The idea of bifurcations was developed by Ilya Prigogine and coworkers. A bifurcation is a more general version of a phase transition, and is a very common phenomenon in nonlinear dynamical systems pushed far away from equilibrium. At a bifurcation point in phase space, the system has two choices (the so-called ‘pitchfork bifurcation’), and the choice actually made is purely a matter of chance (it is rather like whether in a ferromagnetic phase transition, a particular portion of a crystal will opt for a spin-up configuration or a spin-down configuration). Even the most minor of thermal or other fluctuations can push the system to one bifurcation branch or the other. Since the nature of the fluctuation (including a quantum fluctuation) that may happen to occur at the moment of the bifurcation point cannot be predicted, the evolution of complexity becomes unpredictable. And this is true even for fully deterministic systems.
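For readers who want to see this branch-selection effect numerically, here is a minimal sketch of my own (not from Dr. Wadhawan's comment), using the standard pitchfork normal form dx/dt = μx − x³ with a weak additive noise term: starting exactly at the unstable point x = 0, which branch the system settles on depends only on the chance fluctuations along the way.

```python
import math
import random

# Toy pitchfork bifurcation: overdamped dynamics dx/dt = mu*x - x**3 plus
# weak noise (Euler-Maruyama integration). For mu > 0 the point x = 0 is
# unstable, and the system settles near +sqrt(mu) or -sqrt(mu) purely by chance.
def settle(mu=1.0, dt=1e-3, steps=20_000, noise=1e-3, seed=None):
    rng = random.Random(seed)
    x = 0.0                                   # start exactly at the unstable point
    for _ in range(steps):
        x += (mu * x - x**3) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# Different fluctuation histories pick different branches (values near +1 or -1):
print([round(settle(seed=s), 2) for s in range(6)])
```

The deterministic part of the dynamics is identical in every run; only the unpredictable fluctuation at the moment of instability decides the outcome, which is exactly the point made above.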
Bifurcations can occur repeatedly if the system is pushed more and more away from equilibrium. The history of our universe is one grand saga of the successive bifurcations that happened to have occurred. A different set of bifurcations would have led to a totally different universe. And many people still want to think in terms of a God!
In my opinion, one of the most important results in the theory of complexity was first arrived at by Chris Langton of the Santa Fe Institute (in 1985, I think), when he came up with the notion that most complex systems exist at the EDGE OF CHAOS. This notion has been confirmed, independently invented, reinvented, or strengthened by a number of other workers; for example, Per Bak, Stephen Wolfram, Doyne Farmer, John Holland, Stuart Kauffman, and quite a few others. It is now generally agreed that most complex systems tend to approach a configuration at the edge of chaos, and then tend to stay there. The edge of chaos can be imagined as a thin membrane in phase space which separates chaos on one side from order on the other. Most complex systems are in a state that is neither too ordered nor too chaotic. Thus there is room for exploration and perpetual novelty, which is not possible when there is either total chaos or excessive order. Chaos is of central importance for understanding complexity.
The word randomness has been mentioned more than once in Ajita’s note. One must make a distinction between chaos and randomness. To the extent that we can ignore noise in a system, classical chaos is fully deterministic. Information is a measure of complexity, and chaos has the largest (but FINITE) degree of complexity. Entropy has been defined in a variety of ways. The one best suited for identifying chaos, and for providing a quantitative measure of it, is the Kolmogorov-Sinai (K-S) entropy. K-S entropy has the property that it requires the computation of sequence probabilities (i.e. probabilities for all the various routes that the system will follow over time). Moreover, it represents a rate. I skip details, and give only some results here. The K-S entropy is zero for all nonchaotic ‘attractors’ (of any period). Such attractors represent systems which do not evolve with time. The K-S entropy for them remains constant with time. No new information is generated or gained over time. By contrast, in the chaotic regime there is continuous evolution with time. At any future time the system can be in a totally unpredictable state, providing a steady supply of new information. That is why the K-S entropy for a chaotic system is found to be some positive constant.
The K-S entropy also illustrates the difference between chaotic data and random data. By ‘random’ we mean that determinism, if any, is practically negligible. The K-S entropy for a random system works out to be INFINITY (at least for uniformly distributed data), unlike the finite but large values it has for chaotic systems. No wonder I am more comfortable with chaos than with randomness! And chaos theory is quite good for understanding a number of features of complex systems. Chaos is deterministic, and yet unpredictable.
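As a concrete illustration of these statements (my own sketch, not Dr. Wadhawan's), for one-dimensional maps the K-S entropy equals the positive part of the Lyapunov exponent (Pesin's identity), so a simple numerical estimate of that exponent separates a periodic attractor (K-S entropy zero) from a chaotic one (K-S entropy positive and finite).

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).
# For 1-D maps, the K-S entropy equals the positive part of this exponent
# (Pesin's identity): zero/negative means a periodic attractor, positive means chaos.
def lyapunov_logistic(r, x0=0.1, n_transient=1_000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        deriv = abs(r * (1.0 - 2.0 * x))      # |f'(x)| along the orbit
        total += math.log(max(deriv, 1e-300)) # guard against a rare exact-zero hit
    return total / n_iter

print(lyapunov_logistic(3.2))  # negative: period-2 attractor, K-S entropy zero
print(lyapunov_logistic(4.0))  # ~ln 2 = 0.693...: chaotic, positive K-S entropy
```

The chaotic orbit at r = 4.0 keeps generating new information at a steady finite rate, while the periodic orbit at r = 3.2 generates none, matching the description above.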
I am quite ignorant about what quantum indeterminism really means. Quantum theory has been tremendously successful in helping us understand so many things in Nature. But do we really understand quantum theory? I quote Richard Feynman: ‘I think that I can safely say that nobody understands quantum mechanics’. Although Einstein had his reservations about the quantum mechanics of his time, his objections were brushed aside. Then we lived with the well-known Copenhagen interpretation of quantum mechanics (supported by Bohr and others), till Hugh Everett came on the scene with his ‘many worlds’ idea. Even this has been replaced by some brilliant new pieces of work. For example, Feynman used his well-known path-integral approach and introduced the idea of ‘parallel histories’ of the universe. Another approach is Murray Gell-Mann’s coarse-graining recipe for interfacing the quantum-mechanical microscopic world with the classical macroscopic world. Even ‘quantum Darwinism’ has been postulated. It appears that the last word has still not been said.
I am attracted by Prigogine’s reformulation of quantum mechanics. He pointed out that the so-called Poincaré resonances occur in both classical and quantum physics. Therefore, as in classical physics, one has to go beyond Hilbert space for formulating a statistical theory applicable to quantum ‘large Poincaré systems’ (LPSs). The Copenhagen interpretation had to be introduced in quantum mechanics in the past because measurement breaks time-symmetry, and is therefore not in conformity with the time-symmetric Schrödinger equation. The LPSs considered by Prigogine already break time-symmetry, and thus blur the distinction between microscopic and macroscopic quantum physics. My hunch is that quantum indeterminism is perhaps not the real thing to chase when grappling with the question of free will.
‘Consciousness’ is at best an ill-defined term. Horgan (1994) defined consciousness as ‘our immediate, subjective awareness of the world and of ourselves.’ Hawkins (2004) put forward his ‘memory and prediction theory’ of human intelligence: ‘The brain uses vast amounts of memory to create a model of the world. Everything we know and have learnt is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence’. The cortex is the portion of the brain wherein most of these actions of remembering and predicting occur. According to Hawkins, ‘consciousness is simply what it feels like to have a cortex’.
• Dr Wadhawan,
Thank you for your detailed reply. I found it very enlightening, especially the part about K-S entropy and chaos. The scientific perspective that you bring to the conversations can help a lot of us understand more about these subjects that we discuss here at Nirmukta.
• R K Khardekar
Dear Ajita,
On free will, I understood something from reading the UPADESH SARAM of RAMANA MAHARSHI.
As you know, ‘KARMA SIDDHANT’ is primary and basic in Indian philosophy when it tries to understand the human condition.
So, while in the first stanza Ramana Maharshi alludes to Ishwar’s command ruling our lives, within the third or fourth stanza itself he clarifies the role of karma too.
The Maharshi poses a question: Karma Kim Param?
And gives a definitive answer: Karma tat Jadam,
implying that karma is just a jadam, a static rule book. It does not bind in any way ‘the Chaitanya’ (which is the human essence as per Indian philosophy).
The rest of the Upadesha Saram is too deep and too involved for a merely scientifically trained person to even attempt to grasp.
From the realm of directly experiencing the infinite energy that is this universe (the same as the Vaidik GOD, of which we humans are an integral part), to mere speculation based on observations of the manifest material world (however consistently done), there will always remain a chasm.
Do we have free will? Sure we have it. That is the main thing we have that allows us to choose the ethical path, even if the ethical path appears counter-intuitive.
Even an atheist congratulates himself on this premise: “I am a good human being and I don’t need a GOD to tell me to be good.”
To be utterly selfish is simpler in contrast, even if it means being oblivious to the harm we may cause to anything other than ourselves.
So it is not for fear of the backlash of karma that we are good. We are good because we have the free will to resist our built-in instinct of self-preservation from overshooting its mandate, and to become the magnificent something that we truly are.
One does not need to be an atheist to realise one’s full potential. Perhaps the atheist is deluding himself too, after all.
Ravi Khardekar
• Dear Ajita Kamal,
I am a physicist busy doing some interesting applied physics work.
I however have stomach for theoretical physics too.
The cosmologists and particle physicists are striving very hard to get a hold on what makes the universe go. The string theories are a meticulous way of explaining the complex phenomena associated with each verifiable parameter in high-energy physics, and they have landed on a universe which has eleven dimensions,
with the Big Bang occurring as a fluctuating collision between some very high-dimensional membranes.
In this way they try to get over the fact that the Big Bang as a singularity (what was there before the Big Bang occurred?) is not a philosophically appealing concept.
The complexity theories are fine, each in its own domain, but the universe itself is the greatest complexity. Kudos to human courage that we try to grapple with these matters at all.
(It is much simpler to try to organise our economic and political theories to make the world a comfortable place for flora, fauna, and the various human ethnic diversities, and live happily ever after.
But it is not ordained.
Economists invariably turn dishonest and spring surprises like sub-prime on unsuspecting fellow beings.
They are worried neither about the welfare of the smallest economic element, the ‘consumer-customer’, nor about the ‘finite nature of resources’.
The end result is needless complexity.
So apparently the economic and political class has an infinite amount of free will, while the consumer class has none whatsoever.
But this is a different topic …)
As for myself, I am not averse to seeking models and theories in vedantic thought, because the vedantins were even more courageous.
They had few means of interpersonal communication (nothing like the Internet, which is akin to a giant collective brain for us modern human beings), yet they had some startlingly fresh theories about the universe.
I hope we can moderate our agnostic egos a bit, to at least appreciate the greatness of our ancestors on this count.
Ravi Khardekar
• Thank you sir, for your patient explanation of the concepts!
So, am I right in inferring that in ecosystems (which are practically open systems), entropy decreases locally when simpler elements, present as atoms and small molecules like oxygen, carbon, nitrogen, and hydrogen, are converted into more complex and organized molecules like glucose and amino acids (the monomers of proteins)? But the same organisms that accomplish the above also increase entropy in some other way, for instance by breathing out carbon dioxide into the atmosphere (which is my conjecture plus possibly a vague recollection). Also, is this reduction in entropy sustainable only because the internal energy of the organisms is increasing?
I would like to point out, sir, that I am less than half your age, and there’s a possibility you will find my ideas in the aforementioned posts quite amateurish!
Also, it would not be possible for me to access the ‘Resonance’ journal from where I am living currently.
I am very pleased to have the honor of interacting with a published author!
All the best for your upcoming book!
Thanks, again!
Take care.
• V. K. Wadhawan
Dear Ketan:
Our Earth, an open system, has been receiving a large input of information (as Gibbs free energy), mainly from the Sun. Most of this input into the biosphere of the earth dissipates as heat, or is re-emitted back into the cosmos. But a small fraction gets stored as cybernetic information.
The cybernetic information (also called semiotic information) is stored in our ecosphere in the form of simple or complex molecules. Some of the energy-rich simple molecules in which the free energy from the Sun gets stored are: H2S, FeS, H2, phosphate esters, HCN, pyrophosphates, and thioesters. In the history of chemical and biological evolution on Earth, such simple molecules contributed to the evolution of complex molecules characterising life. When food is consumed by a living organism, its processing by the organism builds up a high information content for the organism, even though there is always a net rise in the global entropy (as demanded by the second law of thermodynamics).
The nonbiological versions of this build up of information or complexity are called social progress or cultural evolution.
Biological evolution involves survival and reproduction. Biological evolution was preceded by chemical or molecular evolution. The latter involved ‘autocatalytic’ sets of molecules, which consumed energy-rich molecules (‘food’) to reproduce. These complex molecules evolved by processes involving cumulative natural selection.
The journal Resonance is accessible online, and it is free of cost. Just google “Resonance magazine”. You can download any article you want.
• Very good article. I enjoyed reading this.
However, I have to comment on one aspect of the article. We need to define “God” before we can debate whether it exists or not. Unlike “horse”, “God” has no universal meaning. It means different things to different people. If, for example, you define “God” as the wind, you can easily prove that God exists. But such a god will have only limited influence over humans, and will not serve the objectives of most humans! On the contrary, if you define God as a “ten-headed monster that has the habit of swallowing large celestial objects”, it is tough to prove its existence! But such a God will attract humans to a large extent!
The fundamental problem is that we make a sweeping statement like “God doesn’t exist” and society brushes us off. Instead, the right approach is to ask people to define “God”. It is the starting point for getting people to think. You can then explain to them why their idea doesn’t make sense, and the implications.
• Vinod K. Wadhawan
Hello Raj,
The basic problem most people have is: If there is no God, how did everything get created? It is really no use asking such people: ‘If God created everything, how did God get created?’ Their mental conditioning has been such that they do not feel the need to get an answer to this difficult question. I have three suggestions about what we should try to do.
1. Focus on children.
2. Make people aware of THE ANTHROPIC PRINCIPLE.
3. Try to answer the question of how the universe got created out of nothing.
I take the last point first. Our universe is believed to have begun with the Big Bang, 10-15 billion years ago. The singularity at the moment of the Big Bang was of such small spatial dimensions that quantum-mechanical effects in general, and the Heisenberg uncertainty principle in particular, were extremely dominant. There is a viewpoint that the universe arose because the fluctuation in momentum and kinetic energy permitted by the Heisenberg principle (because of the vanishingly small spatial dimensions at the moment of the singularity) was large enough to account for the immense amount of the energy in the universe. Space and time were strongly twisted in the beginning. Space itself exploded, its dynamics explained later by Einstein’s geometrical laws of general relativity.
How can energy be created out of nothing, and how is it continuing to increase as the universe expands? Apart from what I have said above, here is a possible answer, given by Seth Lloyd (2006) in his book PROGRAMMING THE UNIVERSE: ‘Quantum mechanics describes energy in terms of quantum fields, a kind of underlying fabric of the universe, whose weave makes up the elementary particles – photons, electrons, quarks. The energy we see around us, then – in the form of Earth, stars, light, heat – was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together. . . As the universe expands (which it continues to do), gravity sucks energy out of the quantum fields. The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction. As the expansion proceeds, more and more positive energy becomes available, in the form of matter and light – compensated for by the negative energy in the attractive force of the gravitational field.’ Lloyd emphasizes the complementary roles of energy and information in the cosmic evolution of complexity: ‘Energy makes physical systems do things. Information tells them what to do.’
I agree with you that we should goad people into making sensible statements like ‘I define God as . . . ’. But the answer you will get most often is: ‘There has to be a force, a power, which created everything. I call that power ‘God’’. Many people find justification for that kind of reasoning from all the ‘miracles of creation’ they see around them which make their life possible and sustainable. The fact is that many of these just-right facts of Nature can be easily explained by the Anthropic Principle. I describe it here in some detail, drawing substantially from Richard Dawkins’ (2007) book THE GOD DELUSION.
The anthropic principle epitomizes the relentless evolution of complexity in Nature, exemplified by the emergence or evolution of humans, who are not only living but also conscious entities with a free will. It is instructive to first consider some terrestrial or planetary manifestations of the principle before taking up a description of the (controversial) ‘strong’ or cosmological version of the principle.
In particle physics and cosmology, we humans have had to introduce ‘best fit’ parameters (fundamental constants) to explain the universe as we see it. Slightly different values for some of the critical parameters would have led to entirely different histories of the cosmos. Why do those parameters have the values they have? The ‘weak’ or ‘terrestrial’ or ‘planetary’ version of the anthropic principle answers this question. This version says that: the parameters and the laws of physics can be taken as fixed; it is simply that we humans have appeared in the universe to ask such questions at a time when the conditions were just right for our life.
This version suffices to explain quite a few ‘coincidences’ related to the fact that the conditions for our evolution and existence on the planet Earth happen to be ‘just right’ for that purpose. Life as we know it exists only on planet Earth. Here is a list of favourable necessary conditions for its existence:
Availability of liquid water is one of the preconditions for our kind of life. Around a typical star like our Sun, there is an optimum zone (popularly called the ‘Goldilocks zone’), neither so hot that water would evaporate, nor so cold that water would freeze, such that planets orbiting in that zone can sustain liquid water. Our Earth is one such planet.
This optimum orbital zone should be circular or nearly circular. Once again, our Earth fulfils that requirement. A highly elliptical orbit would take the planet sometimes too close to the Sun, and sometimes too far, during its cycle. That would result in periods when water either evaporates or freezes. Life as we know it needs liquid water all the time.
The location of the planet Jupiter in our Solar system is such that it acts like a ‘massive gravitational vacuum cleaner’, intercepting asteroids that would have been otherwise lethal to our survival.
Planet Earth has a single relatively large Moon, which serves to stabilize its axis of rotation.
Our Sun is not a binary star. Binary stars can have planets, but their orbits can get messed up in all sorts of ways, entailing unstable or varying conditions, inimical for life to survive and evolve.
Most of the planets of stars in our universe are not in the Goldilocks zones of their parent stars. This is understandable because, as the above list of favourable conditions shows, the probability for this to happen must be very low indeed. BUT HOWSOEVER LOW THIS PROBABILITY IS, IT IS NOT ZERO: THE PROOF IS THAT LIFE DOES INDEED EXIST ON EARTH.
The story of the incredible-looking set of favourable conditions for our existence does not stop here. What we have listed above are just some necessary conditions. They are by no means also sufficient conditions. With all the above conditions available on Earth, another highly improbable set of phenomena occurred, namely the actual origin of life in the existing watery conditions. This origin was a highly improbable (but not impossible) set of chemical events, leading to the emergence of a mechanism for heredity. This mechanism came in the form of the emergence of some kind of complex genetic molecules like RNA. THIS WAS A HIGHLY IMPROBABLE THING TO HAPPEN, BUT OUR EXISTENCE IMPLIES THAT SUCH AN EVENT, OR A SEQUENCE OF EVENTS, DID INDEED TAKE PLACE. Once life had originated, Darwinian evolution of complexity through natural selection (which is not a highly improbable set of events) did the rest, and here we are, discussing such questions.
Like the origin of life, another extremely improbable event (or a set of events) was the emergence of the sophisticated eukaryotic cell (on which the life of humans is based). We can invoke the terrestrial anthropic principle again to say that, no matter how improbable such an event was statistically, it did indeed happen; otherwise we humans would not be there. The occurrence of all such one-off highly improbable events is explained well by the anthropic principle enunciated above.
It is not only that the planet we live on is conducive to our existence; even the universe we live in (with its operative set of laws of physics) is so. The cosmological or ‘strong’ version of the anthropic principle says that our universe is what it is because we humans exist. Had it been different, we would not be here, discussing the anthropic principle.
The chemical elements needed for life were forged in stars, and then flung far into space through supernova explosions. This required a certain amount of time. Therefore the universe cannot be younger than the lifetime of stars. The universe cannot be too old either, because then all the stars would be ‘dead’. Thus, according to the anthropic principle, life can exist only when the universe has just the age that we humans measure it to be, and has just the physical constants that we measure them to be. Nothing ‘divine’ about that.
It has been calculated that if the laws and fundamental constants of our universe had been even slightly different from what they are, life as we know it would not have been possible. Rees, for example, has listed six fundamental constants which together determine the universe as we see it. Their fine-tuned mutual values happen to be such that even a slightly different set of these six numbers would have been inimical to human emergence and existence.

Consideration of just one of these constants, namely the strength of what is called in nuclear physics the strong interaction (which determines the binding energies of nuclei), is enough to make the point. It is defined as that fraction of the mass of an atom of hydrogen which is released as energy when hydrogen atoms fuse to form an atom of helium. Its value is ~0.007, which is just right (give or take a small acceptable range) for any known chemistry to exist, and ‘no chemistry’ means ‘no life’.

Our chemistry is based on reactions among the 90-odd elements. Hydrogen is the simplest among them, and the first to occur in the periodic table. All the other elements in our universe got synthesised by fusion of hydrogen atoms. This nuclear fusion depends on the strength of the strong or nuclear interaction, and also on the ability of a system to overcome the intense Coulomb repulsion between the fusing nuclei. The creation of intense temperatures is one way of overcoming the Coulomb repulsion. A small star like our Sun has a temperature high enough for the production of only helium from hydrogen. The other elements in the periodic table must have been made in the much hotter interiors of stars larger than our Sun. These big stars may explode as supernovas, sending their contents out as stellar dust clouds, which eventually condense, creating new stars and planets, including our own Earth. That is how our Earth came to have the 90-odd elements so crucial to the chemistry of our life.

The value 0.007 for the strong interaction determined the upper limit on the mass number of the elements we have here on Earth and elsewhere in our universe. A value of, say, 0.006 would mean that the universe would contain nothing but hydrogen, making impossible any chemistry whatsoever. And if it were too large, say 0.008, all the hydrogen would have disappeared by fusing into heavier elements. No hydrogen would mean no life as we know it; in particular, there would be no water without hydrogen.
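For a rough feel of what the 0.007 efficiency means, here is a back-of-the-envelope sketch of my own (not part of the comment above): the energy released when a given mass of hydrogen fuses into helium is just that fraction of its rest-mass energy, E = εmc².

```python
# Back-of-the-envelope check of the ~0.007 fusion efficiency mentioned above.
# Energy released when 1 kg of hydrogen fuses into helium: E = epsilon * m * c**2.
c = 2.998e8          # speed of light, m/s
epsilon = 0.007      # fraction of hydrogen rest mass released as energy
m = 1.0              # kilograms of hydrogen fused
energy = epsilon * m * c**2
print(f"{energy:.2e} J")   # ~6.3e14 J per kg, i.e. roughly 150 kilotons of TNT
```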
Similar considerations hold for the other finely-tuned fundamental constants of our universe. It is meaningless to ask why the constants have the values they have. Or, if you insist on getting an answer to that question, here it is: The ‘reason’ why they have the values they have is that we humans exist; that is the essence of the cosmological anthropic principle. We can possibly discuss the values of these fundamental constants only in a universe that is capable of producing us. Our existence therefore ‘explains’ or rationalizes the measured values of these cosmological constants.
I think there is not much point in wasting too much time and energy trying to change the thinking of grown-ups. The only hope for rationalism lies in focusing on children. Catch them young, before their usual indoctrination gets irreversibly embedded in their way of thinking.
• Wadhawan sir,
Thanks again! Just one more doubt. The information content of the Earth and organisms that you have talked of above, is it quantifiable, or is it a qualitative concept?
• Vinod K. Wadhawan
Dear Ketan,
Complexity has many definitions. Naturally, there is more than one way of quantifying it. Here is a general definition of a complex system: A complex system usually consists of a large number of simple ‘members’, ‘elements’ or ‘agents’, which interact with one another and with the environment, and which have the potential to generate qualitatively new collective behaviour, the manifestations of this behaviour being the ‘spontaneous’ creation of new spatial, temporal, or functional structures. Thus the characteristic feature of complex systems is the emergence of UNEXPECTED or UNPREDICTABLE properties or behaviour.
A simple-minded way of defining the DEGREE OF COMPLEXITY of a system is in terms of the amount of information needed for specifying its structure and function. I cannot go into too many details here. Suffice it to say that information, in terms of which one may try to quantify the degree of complexity of a system, has a certain anthropocentric aspect; it is not independent of ‘who is talking to whom’.
A very important way of defining the degree of complexity has been introduced by Eric Chaisson (2001) in his book COSMIC EVOLUTION: THE RISE OF COMPLEXITY IN NATURE. He emphasizes the importance of a central physical quantity for understanding cosmic evolution, and I outline his arguments here. The physical quantity is the FREE-ENERGY RATE DENSITY, or the SPECIFIC FREE ENERGY RATE, denoted by Φ (capital phi). Chaisson emphasizes the fact that ‘energy flow is the principal means whereby all of Nature’s diverse systems naturally generate complexity, some of them evolving to impressive degrees of order characteristic of life and society’. The FLOW refers to a rate of input and output of free energy. If the input rate is zero, a system would sooner or later come to a state of equilibrium, marking an end to the evolution of complexity. Equally importantly, if the output rate is zero, disastrous consequences for the complex system would follow.
The energy per unit time per unit mass (quantifying complexity) has the units of power per unit mass. Other similar quantities in science are: the luminosity-to-mass ratio in astronomy; power density in physics; specific radiation flux in geology; specific metabolic rate in biology; and the power-to-mass ratio in engineering. Chaisson estimated the values of this parameter for a variety of systems. The results are amazing, and important. Here are some typical estimated values (all in erg s⁻¹ g⁻¹):
Galaxies (Milky Way): 0.5
Stars (Sun): 2
Planets (Earth): 75
Plants (biosphere): 900
Animals (human body): 20,000
Brains (human cranium): 150,000
Society (modern culture): 500,000
The value of the energy rate density for the human brain (150,000 erg s⁻¹ g⁻¹) is particularly interesting, though not unexpected. ‘This large energy density flowing through our heads, mostly to maintain the electrical actions of countless neurons, testifies to the disproportionate amount of worth Nature has invested in brains; occupying 2 percent of our body’s mass yet using nearly 20 percent of its energy intake, our cranium is striking evidence of the superiority, in evolutionary terms, of brain over brawn. Thus, to keep thinking, our heads glow (in the far-infrared) with as much energy as a small lightbulb; when the “light” goes out, we die’ (Chaisson 2001).
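Chaisson's brain figure is easy to cross-check with round numbers (a quick sketch of my own, assuming the commonly quoted ~20 W metabolic power and ~1.35 kg mass of the human brain):

```python
# Cross-check of the ~150,000 erg/s/g figure for the human brain.
# Assumed round numbers: about 20 W of metabolic power, about 1.35 kg of mass.
power_watts = 20.0
mass_grams = 1350.0
phi = power_watts * 1.0e7 / mass_grams   # 1 W = 1e7 erg/s
print(f"{phi:,.0f} erg/s/g")             # ~148,000, close to Chaisson's 150,000
```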
As the above estimates show to some extent, complexity in the universe is increasing exponentially, even though there is no physical law which says that complexity must always increase.
• Ajita ma’am,
Thanks for your informative message! Honestly, I’ve not been into rigorous philosophy professionally, and whatever little I know is through my amateur interest.
I’d read long back on Wikipedia about compatibilism vis-a-vis determinism and free will. And my interpretation of compatibilism was this: ‘the idea that free will exists even if all the future states (of neurons and their neurotransmitters) can be predicted from their current states.’ Is that interpretation accurate? Somehow, I found the idea absurd.
My idea of free will is this: ‘being able to make choices and decisions which are independent of pre-existing states in the brain’. I doubt whether this kind of free will actually exists, especially if one were to call the universe (including neural activity) deterministic. It’s important to remember that all our emotions, decisions, and memories are just the result of neurotransmitters crossing over from one neuron to another through the gaps between them known as synapses. This crossing over of neurotransmitters is triggered by, and in turn triggers, electrical disturbances across the cell membranes called ‘action potentials’. If free will is to exist, there has to be an entity which acts absolutely independently of pre-existing states. This would require it to be super-random: for instance, a neuron NOT initiating an action potential ‘on its own’ despite being ‘prompted’ by another neuron bombarding it with excitatory neurotransmitters. This does not seem plausible to me at all.
Free will is a subject that is being investigated by neurologists. For instance, scientists have marked out the regions in the brain where the first action potentials are generated when any ‘volitional’ action takes place. And unfortunately, it is said that the generation of these action potentials is not a local event, but is diffusely spread across multiple neurons. The only problem is that neurologists have been using terms like ‘executive function’ rather than ‘free-will’ and ‘volition’, without working in concert with philosophers.
For me, ‘contra-causal’ is a very complicated term, but I guessed its meaning to be ‘independent of the influences of prevailing factors’, something that I’ve talked of above as ‘independent of pre-existing states’. Is my guess correct?
I’d love to have your views on my blog post above where neural aspects of the issue have been discussed.
Thanks again!
Take care.
• R K Khardekar
Dear Dr. Wadhavan,
First, there are many “God” concepts. People refer to God in a vague manner at best, depending on how much they could absorb of the collective cumulative intelligence on the subject prevalent in their surroundings, and on how much of it they could resist.
I was in Pisa last month attending a Gordon Research Conference on Hydrogen-Metal Systems. The work of Galileo was against, to a somewhat dangerous extent, the God concept prevalent in his ‘space-time’. The veracity of his work eventually transcended the God concept then prevalent.
This historical happening repeated itself (happily, without the same sad consequences for scientists), discovery after discovery and invention after invention, so much so that science no longer depends on religious interpretations for its own existence. Scientists also frown strongly if somebody tries to draw religious connotations from any particular discovery.
This is the first thing I notice when thinking about the subject of God and science.
I will comment on the second thing in a second mail and subsequent postings.
Ravi Khardekar
• Ashok Kumar Arora
Dear Sir(s)/Ma’am(s),
Ladies & Gentlemen,
I am extremely sorry that I addressed you as ‘Dear Sir(s)’ only; inadvertently, while addressing, I forgot that there are comments and replies by Ma’am(s) also. Further, I inadvertently clicked to post my comments without editing them; the existing post may please be discarded for its typos. Any inconvenience caused is deeply regretted. The corrected detail is as under, please.
With Warm Regards
Ashok Kumar Arora
Dear Sir(s)/Ma’am(s),
Ladies & Gentlemen,
It may be very difficult to understand this typical subject. There is no doubt about science and scientific values, or about the day-to-day help, satisfaction, and communication/information people are getting from them. Science and its usage for the cause of humanity have brought the distances of the world very close; it has helped mankind by providing solutions to many complexities. A lot of research has taken place, and a lot more is on the way; new information is being shared by one and all. The peak of science we may not find in our lifetime, as others have come, done their duty, and left. We believe the earlier research and the written books and other literature (that which existed and is being stored) until the same is proved otherwise by later research. It is very good that even now, research, when shared, is debated and accepted by people only after satisfactory logical answers. But at the same time, it is excellent to write books and articles on the existence of God, or on blind belief in the existence of God.
There also exists an equally large body of literature (handwritten or printed) dealing with the existence of God and belief in God. The book “The God Delusion”, written by Richard Dawkins, is one such book, into which a great deal of compilation on the subject of belief or otherwise has gone.
This article, “A Scientific View of the God Delusion and its Implications”, is equally important for people who themselves want to evaluate the existence of God and to prove his existence in the manner of other scientific researches.
As per the existing literature, some say God is “Sayambhoo”, i.e. he came into being by himself; some say he is “Nirakar”, i.e. he has no shape; some say he is “Aum”, i.e. the sound produced by the recitation of the letters “A”, “U”, and “M”, sounding as “OM”; some say he is “light”; some say he has no “gender”; some say he is “Mother”;
some say he is the “Creator”; some say “Jassi Rahee Bhavna Jis Kee, Prabhu Moorat Tin Dekhy Tassee”, i.e. as someone desires to see God, he sees him in that very shape or form; some say “we are all God’s ansh (part); awaken or enlighten yourself and you will find him, and later merge with God”. So, to prove the existence of God, reference is made to the literature saying that he cannot be seen but only felt. At the same time, some cite the feelings of people compiled in the literature, quoting “Jin Doonda Teen Payeya”, i.e. “all those who awakened themselves found him”; some point to the different creations to prove his existence, saying that all this is the creation of one and only one “God”, and that no single person could create these; some say he incarnated and came to the earthly world of his own accord under his “Leela”, did his work, and left of his own accord.
So this article by Dr. V. K. Wadhawan, as rightly said, adds to the existing literature for the generations to come, to think and evaluate whether to believe in “God”, or simply to remain as they are and live their lives as they feel.
Congratulations and best wishes to all who have participated, and in advance to those who shall be participating, in this reply/comments column.
With Warm Regards
Ashok Kumar Arora
• Eric Coulshed
God has proven God’s existence to me through a lifetime of experience (47 years). God has answered prayers, shown me future events which have since taken place (God is not constrained by time), and on occasion spoken to me directly with statements like ‘Find out what love is’ and ‘For every decision there is a consequence’. God has shown me that God is the ‘one and only God’ through the Hindu, Christian and Buddhist religions (and others). God is in effect saying to you now, through me, that ‘I exist’, and is using an awakened child to tell you. God is beyond understanding and, to be honest, has to be experienced rather than put on a laboratory table and analysed. I know who I am, and I hope that one day you find out who you are.
God bless you.
• R K Khardekar
Dear Eric,
It’s wonderful reading your mail, and the way you have expressed your experience.
I disagree with Dr. Wadhawan’s premise that there is a ‘God delusion’ passing off as ‘God’ in this universe.
I do not know how neuropsychologists define ‘delusion’. Some people think that dopamine in the brain creates the ‘God delusion’, but that is too elementary an observation, and it lacks the very rigour of which the ‘scientific community’ is so proud.
If, through injury to the brain or a trauma, some unfortunate persons become deluded (about anything), it should be corrected. Such persons should receive the best medical care available.
But if, by elaborate training to remove negativity and by filling the ‘heart’ with love, brain functioning is refined to the extent that the neurons in the brain begin to fire in a hitherto unknown harmonious pattern, then the reality that is experienced will be a different reality.
It will be uncommon to most other persons, but it will still be a reality. Injured brains may delude, but healthy brains performing at their designed maximum efficiency create new realities.
We can pity the persons who, very diligently, want to deny themselves anything more than an ordinary existence.
As it is aptly said:
If we always do what we always did,
then we shall always get what we always got.
wishing enlightenment to all
– Ravi Khardekar
• Eric Coulshed
Sorry for the late reply. The conversations on this website seem to go on and on. Too much grasping at the ‘truth’ rather than accepting the Truth.
Blessings to all.
So Bakhtiyar Khilji was a Hindu?
More lack of courage to call a spade a spade and attribute to Islam its atrocities. Such intellectual dishonesty =/= nirmukta.
‘In 1193, the Nalanda University was sacked by Bakhtiyar Khalji, a Turk;[22] this event is seen by scholars as a late milestone in the decline of Buddhism in India. Khilji is said to have asked if there was a copy of the Koran at Nalanda before he sacked it. The Persian historian Minhaj-i-Siraj, in his chronicle the Tabaquat-I-Nasiri, reported that thousands of monks were burned alive and thousands beheaded as Khilji tried his best to uproot Buddhism and plant Islam by the sword;[23] the burning of the library continued for several months and “smoke from the burning manuscripts hung for days like a dark pall over the low hills.”‘
• Vinod K. Wadhawan
I agree with you entirely about the Bakhtiyar Khilji part of your statement. The Nalanda University episode was a terrible loss to all of humanity, and the persons responsible for it deserve our highest condemnation. But are you saying that the Hindus of those days played no role whatsoever in the near-ouster of Buddhism from India? Not that I care much about any form of organised religion.
Incidentally, the Dalai Lama has said recently that if there is a clash between science and Tibetan Buddhism, he would let science have the right of way (or some statement to that effect). I think rationalists should welcome such sentiments.
• Dear Dr. Wadhawan,
Spirituality is about ‘cutting the crap’ and ‘realising’ the essential nature of the universe with our total being.
The total being is the superset of logical thinking, inspirational thought, and will.
It is also marked by a lack of intellectual ego and hollow verbosity.
Buddha taught silent meditation (free of verbosity) again.
Many Buddhists spend a lifetime following the technique.
The results, however, must have been mixed.
The Tibetan lamas can sleep through the coldest winter of Lhasa on a stone floor without an extra bedsheet.
Chinese engineers, on the other hand, built the Shanghai Lhasa all-weather road using extreme science.
Both these examples have been chronicled on the Discovery Channel.
Indians in general have not shown a propensity for either.
Coming back to Buddha: while ‘HE’ was free of verbosity, Buddhism is fully and totally verbose.
While he was against named idols, Buddhism created the biggest of idols, of Buddha himself.
The iconoclasts of Muslim origin were breaking ‘buta’ in their hundreds, a word which was an apabhramsha (corruption) of ‘Buddha’.
What you seem to be battling, in the ‘God-free universe paradigm’, is this propensity of human nature to unwittingly do the very thing it strongly opposes.
The more things change, the more they stay the same.
A scientific philosopher is more often guilty of ‘round-about’ arguments than a pure philosopher.
On the other hand, Adi Shankaracharya is a logician par excellence.
But one can benefit from Shankaracharya only with an unbiased mind, or if one has exhausted oneself going in circles with one’s own logic.
Did Shankaracharya single-handedly and non-violently defeat the Buddhist logicians, to re-establish vedic thought?
Did Buddhists force earlier vedantins like Kumarila Bhatta to burn himself painfully over weeks in a large pyre of rice husk set afire from the periphery, apparently after he lost arguments in a shastrarth with Buddhist scholars?
The crowd of Buddhist scholars, who had in fact ‘conned’ the vedantins into the shastrarth, greatly outnumbered them, and shouted ‘Maya Jitam’, ‘Maya Jitam’ after every argument,
booing and forcing the vedantins to the painful death described above.
How and when did the ahimsa of Buddha turn into the meat-eating and alcohol-drinking rites of the Buddhist cult, apart from many other things?
The story is far more complex, and an unbiased reading alone can get us some answers.
Moreover, all of the above should have absolutely no bearing on establishing the superiority of science over other human failings like religion.
The religion of science, being superior in every other way to every other thing, should it not be self-victorious, without reference to anything else?
• Can somebody provide good references that study how Buddhism and Hinduism evolved together until Buddhism virtually disappeared from North India? What was the nature of their relationship from the 6th century BC to the 12th century? What role did Hindus play in driving out Buddhism, and why did they do it after centuries of living together?
• Dear Krshna,
It happened over a very long period. Buddhism grew slowly till the time of Ashoka, who embraced it out of sheer remorse over killing millions in the Kalinga war.
What happened afterwards is not chronicled in great detail. The vedantins were too proud to record their humiliation, but they lost ground politically. The Buddhists, on the other hand, grew in influence and, in later years, became more unethical, manipulating debates and outnumbering the vedantin scholars.
Adi Shankaracharya, just a teenager, took them on and won back the arguments and the honour of the vedas. But the Buddhists were not the only foes: kapaliks, tantriks, and all kinds of deviant sampradayas were leading the religion astray.
The more intellectually complete a religion, the more rapidly it decays. Religions based on simpler thoughts do not answer much, and do not pretend much about welfare or anything else, but appear to be invincible.
The revival of any science or greater thought can only happen when its truly knowledgeable proponents take up the cause: someone like Shankaracharya, who was not a mere intellectual. He was an accomplished spiritualist, a poet, and a logician the like of whom never appeared before or after him. And he adhered very strictly to the vedas and shastras.
You can read a biography of Adi Shankaracharya from the Ramakrishna Mission books; also a must-read is “Mind of Adi Shankara” by Y. Keshava Menon, Jaico Publishers.
What kind of role – killing Buddhist monks and destroying their viharas, or engaging in debates? I see no problem with the latter.
Did Buddhism play no role in pushing back on Hinduism? Obviously it did – that’s why we had Buddhist icons in present-day Afghanistan (some destroyed not so long ago) and further east, where at one time there were none, and Buddhism mushroomed in northern India. Obviously, it replaced some existing beliefs. So why not apply the logic you’re applying to Hinduism’s role in the near-ouster of Buddhism to Buddhism’s role as well? Could it be that there’s some anti-Hindu bias in your view?
You may also be ignoring the internal decay in Buddhism and its practices, which played a not insignificant role in its ouster – all of that has been chronicled by Dr. Ambedkar – please check out his writings on the subject.
So, good for him. And what does that have anything to do with your post? Are you a Tibetan Buddhist?
1. Is rationalism the latest religion/cult? What are the tenets of this new religion – to be anti-Hindu?
2. What kind of science are you talking about? The kind which promotes drugs for profits while ignoring the side-effects?
There’s science, and then there’s “science”.
• It is my experience that rationalists want arguments, not the resolution of arguments.
That is the most irrational thing that rationalists do.
The “irrational” on the other hand is not necessarily “wrong”. It is just that we have not been able to work out the rationale behind it.
It is the same with complexity: if you are not able to work out the correct principle, any phenomenon can appear complex.
Before Kepler analysed the data of Tycho Brahe and gave his laws, Greek philosophers went mad inscribing cubes in spheres and making the most complex structures to understand planetary motion.
Kepler reduced the complexity to elegant simplicity in one shot.
• “The Musalman invaders sacked the Buddhist Universities of Nalanda, Vikramshila, Jagaddala, Odantapuri to name only a few. They raised to the ground Buddhist monasteries with which the country was studded. The monks flew away in thousands to Nepal, Tibet and other places outside India. A very large number were killed outright by the Muslim commanders. How the Buddhist priesthood perished by the sword of the Muslim invaders has been recorded by the Muslim historians themselves. Summarising the evidence relating to the slaughter of the Buddhist monks perpetrated by the Musalman General in the course of his invasion of Bihar in 1197 AD, Mr. Vincent Smith says, ‘….Great quantities of plunder were obtained, and the slaughter of the “shaven headed Brahmans”, that is to say the Buddhist monks, was so thoroughly completed, that when the victor sought for someone capable of explaining the contents of the books in the libraries of the monasteries, not a living man could be found who was able to read them.’ It was discovered, we are told, that the whole of that fortress and city was a college, and in the Hindi tongue they call a college Bihar.’
Vinod-ji, would you like to take a guess as to who wrote the above words? No, not an Islamophobe saffronite or a fascist Hindu. Try again.
Here’s a hint: he is the architect of the Indian Constitution, and the words are from “The decline and fall of Buddhism,” Dr. Babasaheb Ambedkar: Writings and Speeches, Volume III (pp. 229-38).
• That is a terrible loss indeed. What were the subjects discussed in the books that were destroyed? Was the knowledge totally lost forever? Since some of the monks moved to other places, part of it might have been saved.
• Vinod K. Wadhawan
I appreciate Kaffir’s effort at highlighting an ugly chapter in Indian history. The question is: What can ordinary people like us do to minimize the chances of such terrible things happening again? My answer is: Rationalism. We should do all we can to spread rationalism.
• “Note that as per the original records kept in kanchi, badrinath, puri, etc. Adi Shankaracharya’s date is considered 509 BCE and not 788 CE or or 800 CE. The British have played havoc with our dates. Rest of the article has presented a nice summary.”
There are many web pages on Adi Shankaracharya. He was a historical figure who revived Vedantic thought, which was perishing under many onslaughts, including Buddhism.
The happenings of ~1200 AD were of a different genre. This happened again and again. Vijayanagaram was supposed to be bigger than Rome. Nadir Shah struck in Delhi. Most people who could have intelligent thoughts on any subject were annihilated.
The remaining intelligentsia in India have been reading the opinions of British historians and holding them as ultimate truth.
• Hi,
I am thrilled to know that an Indian academician has ‘come out of the closet’ to express his sincere views about the supernatural. I have always been worried by the religious apologetics which run through most of the Indian scientific circles. Surely India needs more of this. By the way, a hearty congratulations on becoming a grandparent!
• Vinod K. Wadhawan
Thanks Vaibhav. After publishing this piece, I have been writing a series of articles ‘Complexity Explained’ on the advice of Ajita Kamal, the Editor of this website. I want to share with you and others some interesting information. I informed many of my scientific colleagues about these articles. Their response has been very positive, but only through private emails. Except for Dr. S. K. Gupta (and also Mr. Khardekar), none of them has thought it prudent to be publicly seen in the company of rationalists by posting comments on this website. Perhaps I am over-generalizing, and shall be happy to be proved wrong. Of course, there is another tongue-in-cheek explanation: Many of them may be playing it safe, just in case there is a God!
• Pascal’s Wager exactly points out the shallowness of religious beliefs. But it does send a negative message to the public when senior scientists attend religious functions. They might be non-believers privately, but it’s the kind of tip-toeing around the problem and going with the flow which is unsettling and ironic. Plus the Indian government, be it BJP- or Congress-led, heavily endorses Hinduism. The point I am making is that the layman could be pulled towards rational thinking more easily if we didn’t have so many educated religious apologists who are public figures too.
• R K Khardekar
I am not a religious apologist. But I am also not convinced that non-believers and/or rationalists are very ethical either. A non-believer need not be a rationalist; likewise, a rationalist need not be a person who really means well by society.
The global warming surely is not the handiwork of people who believe in God’s existence.
The encroachment on river deltas for residential purposes and for agriculture, which saw the terrible floods last weekend, is the handiwork of which group? I mean to say, are you not overemphasizing the need for rationality under the premises that 1. those who believe in God are illogical, and 2. the moment people stop believing in God, the problems of human beings will be solved?
Does not science know enough about safe methods of manufacturing, better methods of city planning, methods of sustainable and energy efficient transportation?
Are there no known decent ways of conducting commerce and providing for all not only employment, but meaningful leisure, sensible entertainment and graceful retirement?
Is our belief in God coming in the way every time? Is the only thing the layman is missing the magic wand of a rationalism free from the God delusion?
What we need actually is the rationalism which ‘solves problems’ without the inevitable ‘entanglements’.
Belief in God or in science is a secondary issue. In fact, if for all our scientific advancement we are not able to solve simple problems of day-to-day living (generation of meaningful employment, equitable or even merit-based reallocation of resources, simple and effective administration of health care; the list is not very big, but a few more one can add, like elimination of injustice, or tackling of pressing issues in a timely manner so that no section of society will ever want to become restless or violent), we will only drive more people towards an imaginary God who alone could come to their rescue.
Nirmukta should allocate more of its attention to that kind of rationality. A satisfied mind forgets God much more easily.
• There are many problems which are not even remotely connected to religious beliefs. But which one do you think is a bigger threat, global warming or Iran inching towards getting a nuclear warhead? What if many people believed that global warming is just a hoax perpetuated by government agencies? Would you not see that as a hindrance towards taking steps to tackle the problem? And as far as I know, one can at least publicly oppose these kinds of delusions, but with religion one can’t even do that. Drawing some cartoons of an illiterate businessman about 1400 years after his death can lead to destruction on a catastrophic scale. That’s how powerful it is, and calling a spade a spade is, I think, the first step towards eradicating it.
• Vinod K. Wadhawan
Dear Vaibhav:
Goaded by your comments, I decided to write an article about Indian scientists. It is already published on this website.
• Dear Dr. Wadhawan,
I found the link to your article on the Brights website, a lucky hit indeed.
On a personal note, I enjoyed reading that you initiated this on becoming a grandfather. The same happened to me; becoming a grandfather raised strong emotions. To celebrate this I went looking into the natural history of grandparenthood and published a post on my website (http://www.dentalcliniclugano.ch/blog/?paged=2).
Perhaps you’d be curious about it.
Greetings from Switzerland.
Giovanni Ruggia
• Vinod K. Wadhawan
Dear Mr. Giovanni Ruggia
I enjoyed reading your well-researched article on your blog. Your analysis confirms my suspicion that, as of now, grandfathers are pretty useless compared to grandmothers (unless they are rich!). This is particularly true for the prevailing conditions in India. But as you rightly state, evolutionary trends have been totally messed up by the highly complex processes of cultural evolution, as contrasted to the blind and purposeless processes of biological evolution. It is difficult to predict what things will be like even in the near future.
You mention the pelvic-girdle constraint imposed by the mother on the maximum size of the human skull and brain. I am reminded of Stephen Hawking. He was probably the first to articulate the view that this constraint would disappear when human embryos grow, not in the womb of the mother, but outside. Then we can evolve much bigger brains for ourselves. Among other things, that may also improve the relative importance of grandfathers, compared to grandmothers! This is just one example of what we humans are going to be doing in the near future, not to mention the huge possibilities that artificial evolution has in store for us (including superintelligent, post-biological, robots visualized by Hans Moravec and R. Kurzweil).
• Wadhawan sir,
This is incredible. I have been a rationalist since my college days, influenced mainly by reading Bertrand Russell and some of the Kannada authors. One argument that I often hear from people (parents, friends and well-wishers) is: “You haven’t seen anything yet. Wait till you see some personal tragedies or become old.” But, after reading your article and seeing that you are a granddad and an atheist, I am truly inspired to continue on the rationalist path.
Congratulations, by the way, on being a grand pa!
• Vinod Wadhawan
Thanks Manjunath. In my college days I was influenced by Nehru and Russell. And also Lala Har Dyal, who wrote the book ‘Hints for Self Culture’. I learnt recently that Shaheed Bhagat Singh was a rationalist too.
• Respected sir,
I am a member of the Tarksheel Society, Punjab, which publishes a bi-monthly in the Punjabi language. Can we publish a translation of this article regarding God? Please reply.
The article and its presentation are marvelous, and it enlightens a lay reader who is not a student of science. However, certain concepts like ‘information’, ‘entropy’ and interactions require more elaboration.
Wishing you happiness and good health.
Your co-traveller,
Avtar Gondara, Advocate
24 District Courts faridkot 151203
• Hi Avtar, I’m the editor of this site. All articles published here are available under the creative commons license, meaning that you are welcome to translate and publish the article as long as you credit Nirmukta as the source and you credit the appropriate author of the article you are republishing. I will contact Dr. Wadhawan and mention your request. I’m sure he will be thrilled at your request.
• Vinod K. Wadhawan
Thanks a lot, Avtar, for your comments. As Ajita Kamal has written, we shall be only too happy to give permission for translation and publication in any language. For explaining terms like ‘information’ and ‘entropy’, I am writing a series of articles under the umbrella title ‘Complexity Explained’. These articles are also designed to promote rationalism. As you will see in due course in these articles, it is possible to understand the origin of life on Earth through the science of complexity; there is nothing ‘divine’ about the origin of life. You can access all my articles at http://nirmukta.com/category/writers/wadhawan/
• Vinod K. Wadhawan
This is in continuation of what I wrote earlier in response to Avtar. Translation of technical terms is a difficult task. I am not sure if, for example, the term ‘entropy’ has a Gurmukhi equivalent. Even if it has, the translator has to have some basic understanding of the science involved, if errors in translation are to be avoided. My articles on this website are already being translated into the Polish language. See, for example, the URL
for the article on God Delusion. The lady doing this translation work has told me that sometimes she has to consult experts for ensuring the correct translation of a technical term.
• Religion, in its organizational sense, is the use of a belief.
It could be viewed, in the larger meaning of that use, as organized discrimination and the use of intolerance excused within the context of a god belief.
In my lifetime and in my own writing I have encountered people who are unwilling or unable, or a combination of the two, to separate their emotional intolerance of others from a rational reality of thought.
Religion is laced forwards and backwards with endless moralizations of actions and lifestyle judgments that, when compared to the reality of nature, can only be summed up as bigotry, ignorance and excused discrimination.
When this concept is viewed both from its historical use and from present-day examples, one finds it hard, if not impossible, not to see religion as an early, ancient form of civil law used for cultural and civil conformity.
Law, in its basic meaning, is the creation of a punishment for an action a society deems intolerable.
Evidence of this is exampled by the very sins it claims a god is offended by.
Evidence is further exampled by how it has been used to punish and expel non-believers.
All of which are perfect examples of bigotry and wilfully planned discrimination.
The central problem with religion and its God concept lies in its denial of its own excused use and self-creation of intolerance.
There is no possible way to explain a God when the meaning behind it is exampled by known human ignorance and known human intolerances.
The evil is its organizational use, and the insanity is its unrealistic blindness to its own reality.
It is one thing to view only the science of science; it is another to know the reality of human emotions.
The two must know of each other.
A fact is a fact only when one can accept the fact as a fact.
The reason for not seeing the facts lies in the refusal to open the mind to see and understand the fact.
Too often the fact is placed aside in favor of the bigotry of refusal to understand.
The only real enemy, which has been responsible for more war, hate, child abuse, torture, illness and death than all other evils combined, is bigotry, and its weapon of choice is ignorance.
Religion may never disappear until it is exposed for what it is.
We may win the battle of facts, but the war will not be won until the reality of reality is exposed.
Bring in the facts, but do not forget to add in how the enemy uses its weapons and what that weapon is.
The one reality that can never be disputed is that we are all a part of the world and we have no other home.
• Vinod K. Wadhawan
Thanks Dr. Magee. I am very impressed by the high level of discussion on evolution and complexity at the Richard Dawkins’ page.
• A God belief, or to put it another way, a religious opinion.
I use ‘opinion’ over the concept of ‘belief’ because religion is about the use of a belief.
The central theme of a religious view lies in its moralizations, which adds to the opinion rather than the belief concept of a God.
There is no science to a God belief, yet there is overwhelming evidence as to the use of such a belief.
To put it in blunt terms, religion can be very well argued, and evidenced in its use, as being organized discrimination.
In this sense of its meaning, there is a great deal of evidence in human nature as to this being exactly its use and its meaning:
cultural conformity its intent, and discrimination its meaning.
Given the origins of law and the times of its beginnings, it becomes almost impossible to conclude it as being anything else.
Arguments of science against religion cannot be won solely on the reality of nature and science until religion is confronted with its roots, its use and its history, both past and present.
It will remain a constant argument until the reality of the nature of the human race and the exposure of its intolerances are fully revealed.
We have all witnessed the endless excuses used for an intolerance; the time is long past to confront the motives for such narrowness of mind.
In this sense religion has no ground to stand on, and science, both the science of discovery and the science of known human behavior, will win in the end, and that ending will bring a better world for us all.
• Vinod K. Wadhawan
Very well put. I want to quote Bertrand Russell:
• Vinod K. Wadhawan
‘The one reality that can never be disputed is that we are all a part of the world and we have no other home.’ Well said. Here is Kahlil Gibran (‘Tears and Laughter’) on this subject:
Speak not of peoples and laws and
Kingdoms, for the whole earth is
My birthplace and all humans are
My brothers.
• In religion, to preach that all must live life a certain way is not only an impossibility that has never happened; it is equally a wilful display of intolerant bigotry toward any who are different from the ones being intolerant.
Nature is diversity; without it there can be no evolution nor progress, no thoughts, no change, no adaptation, no improvement.
This is the way of the universe; this is the way of existence: to change, to process, to adapt.
Science is the search, and the mind is the reason for the search.
To preach of sameness with endless conformity is a violation of the very meaning of existence.
To expect this and to preach that is a display of a deep-felt intolerance bordering on outright bigotry.
The God delusion is a delusionary excuse for an ignorance, and the ignorance is a wilful intent to excuse a disliking.
Consider for a moment the moralizing, then ponder known examples of relationships, of sexism, bigotry and intolerance, and compare this to the reality of all of nature, how it has always been.
Knowledge is the willingness to seek knowledge and the ability to at least attempt an understanding.
That understanding cannot be reached if a bias for the conclusion outweighs the evidence of fact.
The differences in the world are here to learn of; the only question left unexposed is the willingness to understand.
We are all united in that we are all human; we are divided only by those who cannot see nor accept that we are all human.
• A good discussion going on.
• Many, it seems, become outraged at the thought of questioning a religious text. The fault may well be the possibility of exposing oneself as to how one really is, rather than the questioning of a religious text. To disprove a part of it raises the possibility of the entire religion being wrong.
I mention religion as being organized discrimination based upon its judgmental policies of moralizing the value and lives of others.
The human mind is filled with likes and dislikes, tolerances and intolerances. It is often displayed by sexism, racism, jealousy, greed, envy, lust and sexual feelings of insecurity.
All of which has absolutely nothing to do with a God belief and everything to do with the preaching of religion.
Which is more likely to be the true character of the species, a law of god or human emotion?
Which is seen on a near-daily basis? Which is more often the case?
Which is evident in the history of the beast?
Which is used by the species?
Which has created laws around such behavior?
To understand the intent is to understand the motives of intent.
When compared to well-known practice and exampled by evidence, which is more likely the truth?
Religion is, by its practice and its motive, an ancient form of creation of civil laws of conduct in a time of ignorance, in a world of fear. A practice based upon a wilfully intended, discriminating bigotry of ignorant misunderstandings.
In the ancient world of its birth this can be excused by the lack of available knowledge and the inability to research, in a hand-to-mouth world of existence.
In a world of instant chat, instant access to references for research and the ability to cross-reference, ignorance can no longer be excused.
Ignorance in a world of such instant access is now a wilful intent to be ignorant, and this same ignorance of intent is the origin of religion.
The enemy is the same it has always been. This enemy is not a god or no god, and this enemy has been the creator of war, hate, illness, fear, torture and child abuse.
It has been responsible for more evil than all other forms of evil combined, and its name is bigotry.
Its cure is exposure, and its destruction is unbiased, thoughtful knowledge that is available to all.
It only requires a willingness to see it.
We are all a part of this planet; we all share many of the same feelings. Cut us, do we not bleed? Prick us, do we not feel pain?
The same emotional needs that separate us are the same ones that draw us together.
The wall of bigotry and ignorance was built one brick at a time.
We can remove it the same way… one brick at a time.
• Your article has been saved to show my daughter. It’s clear and insightful. A father’s words are seldom as clear. She will be returning from college next week. I also found a version of bhajans by Jagjit Singh. It is wonderful, in a sense similar to enjoying Enya. My music collection is eclectic. I had no need to know why the smoke was at the feet of the fancy lady and the cow.
Don Dahlgaard Norton MA usa
• How strange. An odoriferous green winged heart appeared above my comments on the screen stating (Don Dahlgaard says: Your comment is awaiting moderation.) No I didn’t; why would I? Strange.
• Thank you for that great post. I am sharing this with my Facebook contacts. All the best!
• Showing the evidence of the reality of reality only relays the reality of it to those who see and understand it.
Or, to put it more bluntly, to those with the ability to see it for what it is, and not confuse it with an intolerance of any who live a life in the reality of it.
Faith without reason is insanity without thought; religion is insanity without thought.
We know this; we see it on a near-daily basis.
The problem is the use of discrimination by and within the context of a religion.
The way to remove this is to expose this. The battle is not, and has never been, science over religion; it has always been a bigotry excusing itself in a battle against a reasoning mind.
The evidence of reality is overwhelming in its proof, yet this proof can never win in a war against a bigoted, intolerant outlook that cannot stand the thought of being in a world filled with people who don’t live the way they want them to.
Religion is centered around the use of this. In this battle the war is about exposing this, and in a battle of exposure religion cannot win.
To win this war the only requirement is the constant reminding of what religion does and has done, and to turn the faith into a faith of human possibilities.
• Hello
• There is little question that in time humankind will explain the origins of life and what happens after life ends.
This will happen for the very same reasons that humankind explores the universe, looks into the sky, builds machines, creates civilizations and travels across the seas: because of the same question that is always asked, “WHY”, which continues to be answered with “WHY NOT”.
A mystery only remains a mystery when the question is not asked, and the wondering remains just a wondering when it assumes it cannot be answered.
• The phenomenon of “Life” is a mystery which science has yet to unravel fully. Living beings experience great fear when faced with death or extinction. It is this mystery-cum-fear which drives humans towards a God and, naturally, to one or the other religion. Since it is this Life which concomitantly gives rise to intelligence, which is at the base of all human endeavour, including science and scientific progress, it seems impossible that the human intellect will be capable of solving the riddle of the phenomena of Life and (more importantly) Death and what happens after that.
Is it likely that there is a universal Life Force Field (just like the Gravitational Field) which causes all life forms to manifest whenever and wherever there are conducive conditions and stops manifesting as soon as the physical body becomes incapable of supporting it?
• Vinod K. Wadhawan
1. Complexity science already has a very substantial answer to ‘how life arose out of nonlife’. Please see my 17 articles at http://nirmukta.com/complexity-explained-the-complete-series-by-dr-vinod-wadhawan/
2. Any attempt to postulate a new fundamental interaction in Nature will have to face Ockham’s razor (OR): One cannot introduce more postulates or axioms than what are necessary for explaining natural phenomena. OR is not just a matter of philosophy. It has been given a clear validation in terms of algorithmic information theory.
3. In any case, let somebody try to postulate such a fundamental force, and give a consistent list of its expected properties. Scientists will then check it against experiment. My view is that there is no evidence that such a force exists.
4. If somebody says that certain things are beyond the domain of science, then for such things we can only have tentative opinions. And my opinion is as good as that of the man next door.
5. I can think of no method other than the scientific method for knowing truth. Science does not have all the answers at present, but what else can we do about it?
• Narayan S Amin
Interesting. Fact is fact. Only through science can man find answers to his doubts. Please shed some light on what black magic (mata, mantra) is. A lot of people are afraid of this black magic. How can we understand that it is just a way of deceiving others?
• Vinod K. Wadhawan
I never felt the need to try to understand black magic. There are much better ways of spending my time. As awareness about science spreads, and as literacy spreads, black magic will disappear ‘like magic!’
• Himangsu Sekhar Pal
Part A. Some Reflections on God and Science
“Tegmark’s Ensembles
Tegmark has recently proposed what he calls “the ultimate ensemble theory” in which all universes that mathematically exist also physically exist (Tegmark 1997). By “mathematical existence,” Tegmark means “freedom from contradiction.” So, universes cannot contain square circles, but anything that does not break a rule of logic exists in some universe.”
(From: The Anthropic Coincidences:
A Natural Explanation
Published in The Skeptical Intelligencer, 3(3), July 1999, pp. 2-17.
By Victor J. Stenger)
So here we see that as per Tegmark mathematical existence implies physical existence. From the following equation of special theory of relativity
t1 = t (1 - v^2/c^2)^(1/2)
one can see that if one can move with the speed of light, then he will be immortal. Because when v = c, then for any value of t, the value of t1 will always be zero. Even if the value of t is an eternity, even then the value of t1 will be zero. So in one frame of reference the whole of eternity may pass, but in another frame of reference not a single moment will elapse. Whoever is in this second reference frame will be immortal, because even in the whole time span of an eternity he will not be older by a single second. So from this equation we see that immortality has got mathematical existence. But as per Tegmark, mathematical existence implies physical existence. Therefore we can conclude that immortality has got physical existence also. This means that there is an immortal being in this universe.
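To see the claimed limit numerically, here is a minimal Python sketch of the time-dilation formula quoted above (the function name and the sample speeds are illustrative additions, not part of the original comment):

import math

def dilated_time(t, v, c=1.0):
    # Time t1 shown by a clock moving at speed v while time t elapses
    # in the rest frame: t1 = t * sqrt(1 - v^2/c^2).
    return t * math.sqrt(1.0 - (v / c) ** 2)

# For one unit of rest-frame time, the moving clock records:
for v in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"v = {v:.2f}c  ->  t1 = {dilated_time(1.0, v):.4f}")

As v approaches c, the moving clock’s elapsed time t1 approaches zero; this is the ‘frozen clock’ on which the immortality argument rests, and the replies further down note that a material object can never actually reach v = c.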
In his article “Ten Things Wrong with Cosmological Creationism” Richard Carrier has written: “When we posit a god, we are left with almost no predicted observations–theism does not predict any physical feature of the universe that we can check.”
But this is definitely not true. First of all one will have to decide whose God one is considering. Is it Abraham’s God? Is it Jacob’s God? Or is it mystic-philosopher’s God? If it is mystic-philosopher’s God, then definitely some physical features of the universe can be predicted that can be checked and verified by the scientists. Philosopher’s God is beyond good and evil, one, all pervading, spaceless, timeless, changeless, immortal, etc. Since God is all pervading and spaceless at the same time, so volume of the entire universe must have to be zero. Otherwise, how can that God be spaceless? So, this is one prediction that can be made. The next prediction that can also be made is this: existence of a spaceless, timeless being in this universe implies the relativity of space and time. I have written a book in Bengali (published in 2003) in which I have shown in some great details as to how a spaceless, timeless God implies the relativity of space and time. And this last prediction has already been found to be correct. Since God is one and since everything in this universe has sprung from that one God, then everything in this universe must be ultimately reducible to one thing. This is another prediction that can be made.
Another prediction that comes to my mind is this: God is said to be timeless. If God is really there and if that God is timeless, then there is some sort of timelessness in this universe. For timelessness to be there, time must have to be unreal by some means or other. So God-theory predicts that time must have to be unreal by some means or other. And science has shown that it is just the case. At the speed of light time becomes unreal. If there is no apparent reason for time becoming unreal, there is at least one reason as to why it should be. And that reason is God’s timelessness.
One more prediction: God is said to be immortal. So here God-theory predicts that immortality must be found to be written somewhere, in some scientific theory or law or equation. Here also we find that science has not betrayed us. From the following equation of special theory of relativity we can see that if one can move with the speed of light, then he will be immortal.
Now one question will definitely arise here. Is deathlessness same as timelessness? Is there no difference? This question arises because I have used the same equation for showing as to how one can be timeless as well as immortal. The answer to this question will be a very big YES. Death means some sort of change. I am very much alive at this moment. But at the very next moment I may die. But in a timeless world this very next moment will never come. So a timeless being can never die.
So, it is not true that God-theory does not predict any physical feature of the universe that we can check. As per the definition of a good scientific theory given by Karl Popper, God-theory can be considered to be a very good scientific theory. Because it can predict something that can be checked and verified, and so it can also be falsified. Only those who are heavily prejudiced against God will decline to admit it.
Scientist Victor J. Stenger has written:
“Mystics state that their experience of oneness with God and the universe cannot be described in scientific terms. The more rational statement is that this experience is all in their heads.”
But the problem is that if this God is in mystics’ heads only and not in the outside world, then whatever predictions can be made from God-theory, if at all correct, should be correct in their heads only, and not in the outside world. But since some of these predictions have already been found to be correct in the outside world, then the more rational statement is that this God is in the outside world and not in mystics’ heads only. Or, it may be that, these mystics’ heads are so very big that, like God, the entire outside world is also in their heads. That is why predictions made from God-theory have been found to be correct in the outside world. In that case mystics’ heads must be as big as the universe itself.
Generally two things are claimed about science:
a) Science always deals with something that is real, and not with something that is unreal, imaginary. It is in man’s power imagining anything and everything, and actually he has imagined so many things, so many worlds, and so many beings. But it is not the job of science to prove that all these imagined things, imagined worlds, imagined beings are as real as this world.
b) Only science, and no other discipline, can give us the true picture of reality.
Keeping these two claims about science in our mind, let us proceed further to see what conclusion can be drawn from the following equation of special theory of relativity:
t1 = t (1 - v^2/c^2)^(1/2)
From this equation we have already seen that if one can move with the speed of light, then he will live eternally. So we see that here science has dealt with the idea of immortality, and that it has also shown as to how that immortality can be attained. But if the claim about science that it only deals with what is real is true, then we must conclude that like change and mortality, immortality is also a real feature of this universe. Otherwise, why has science dealt with that? But immortality can be a real feature of this universe if, and only if, there is at least one immortal being in this universe. So the presence of the above equation in a scientific theory clearly indicates that there is at least one immortal being in this universe.
But if one is loath to admit the existence of God, then one will have to admit that while in most of the cases science deals with something that is real, sometimes it also deals with something that is unreal, imaginary, and untrue. In that case one will also have to abandon the claim that only science can give us the true picture of reality. In the above equation science has created an impression that attaining immortality is not an impossibility whereas actually no one can be immortal. So here science has simply baffled us, confused us, misled us. And if we are allowed to use a very bad term here – I hope we will be pardoned for that – then we can even say that by showing that it is possible to be immortal, science has given us a very nice and beautiful bluff. Like so many religious bluffs, it is also a bluff, in this case given by science itself.
So the gist of the whole matter is simply this. Science cannot hold the following two propositions as true simultaneously:
1) God, or, any other immortal being, does not exist,
2) Only science can give us the true picture of reality.
If any one of the above two propositions is true, then the other one must be false.
Mystics who have claimed that they have direct experience of God have repeatedly and unanimously told us one thing: time is unreal. If one claims that God does not exist and that mystical experience is nothing but a mere hallucination, then he must show that mystics were wrong in holding that time was unreal. Here common sense says that to do this one must have to show that time is not unreal and that in no way can it be unreal. But here science has done just the opposite; it has shown as to how and when time will become unreal. But to show that mystical experience is nothing but a hallucination, one must have to show that mystics’ view regarding time was completely mistaken. As science has miserably failed to do that, so by what kind of logic is it established that mystical experience is a hallucination? If mystical experience can no longer be discarded as a mere hallucination, then by what kind of logic is it established that God does not exist?
When man did not know that time could be unreal, his labeling of mystical experience as a hallucination was fully justified, logical and reasonable. But once it has dawned on him that at the speed of light time could become unreal, his discarding mystical experience as a hallucination is totally unjustified, illogical and unreasonable. And, it is unscientific also. As per definition a hallucination is a sensory perception without a source in the external world. When the mystic says that time is unreal, he is definitely in touch with some state where time is unreal. If he were not, he would not have said time was unreal. But he wrongly and erroneously thinks – and believes also – that this timeless state is in the real, external world. But if mystical experience is nothing but a hallucination, then as per its definition this timeless state cannot be in the real world. Because, if this timeless state is in the real world, then mystical experience is not a hallucination. And if mystical experience is not a hallucination, then it cannot be said that God does not exist. But since atheists and scientists claim that God does not exist, then mystical experience must have to be a hallucination. So, if necessary, then even by hook or crook, it will have to be established that mystical experience is nothing but a hallucination. For that it must have to be ensured that this timeless state can never be in the external world. And for that, it must further have to be ensured that time can never be unreal in the external world. But we find that this last condition is not fulfilled at all. It is not fulfilled because science has shown that at the speed of light time becomes unreal. Since time can also be unreal in the external world, then there is every possibility that this timeless state is in the external world. And if this timeless state is in the external world, then mystical experience cannot be called a hallucination. And if mystical experience is not a hallucination, then God is real.
Science is supposed to deal with something that is real, that is existent, that is of this world, and not with something that is unreal, imaginary, and non-existent. If God does not exist, then that God is a fictitious, imaginary Being. Whatever has been said about that imaginary God cannot be true, cannot be real. If God does not exist, then there is no one in this universe about whom it can be said that He is immortal, spaceless, timeless, all pervading etc. So, if God does not exist, then the terms immortality, spacelessness, timelessness etc. will have no meaning at all. These are all imaginary concepts attributed to some imaginary Being. Then why will science, which is supposed to be concerned with only what is real, what is existent, what is of this world, show that all these imaginary concepts have got some sort of scientific explanation? Why will science show that if one can move with the speed of light, then one can be immortal, timeless, etc.? If God is also not real, then how do those imaginary concepts attributed to that imaginary being somehow become part of a real world by being explained scientifically?
Has science ever been found to give proof for the existence of any non-real, imaginary thing? Has science ever been found to give proof for the existence of any non-real, imaginary being? Has science ever been found to offer explanation for the occurrence of any imaginary event? Is science famous for doing all these things? Has science proved that ghosts are real? Has science proved that there is a place called heaven where every human being goes after his or her death? Does science think that real human blood can come out of the wounds of a stone or wooden Jesus? Can one give any single instance where science has supported any single human superstition or folly? If science has never been found to give proof for any single imaginary thing or being, and if science has never been found to offer explanation for any single imaginary event, then why is it that it has on its own given explanation for these imaginary concepts? Why is there an exception here at all? What is the reason behind this? What does it want to make us understand by giving scientific explanation to these imaginary concepts? Does it want to make us understand that these are not imaginary concepts at all? Does it want to make us understand that these are real concepts having meaning and significance in some real context in a real world? Does it want to make us understand God is real?
Perhaps this is the greatest irony in the whole history of our human civilization so far: science has explained that very God whose existence it has vehemently denied. If God does not exist, then those scientists who have given us special theory of relativity should not be called proper scientists at all. And if God does not exist, then special theory of relativity is not a proper science at all; it is simply a pseudo-science, something like astrology. To call it a science is an insult to human reason and understanding.
The problem is that in order for this equation to be true you have to be talking about a material object (being). When v = c you are saying that the object is going at the speed of light. This can’t happen, as the mass becomes infinite at c. In short, you would turn into your own black hole. Furthermore, it would take an infinite amount of energy to get to c. All that is impossible. Now if you are talking about an immaterial being, then none of the equation applies.
An immortal being in literature can usually do stuff. The type of immortality described here consists of existence as a popsicle, frozen in time. This would be no fun at all!
OK: a timeless being can’t do anything because events happen in time. Sure you’d be immortal at the speed of light, because time would be frozen for you. Behold the incredible frozen God!
Predictions only count if, well, they are made in advance of the finding. Already knowing the findings of relativity theory and then claiming that your version of God predicts them is, if not delusional, at least cheating.
Plus I don’t see any good reason to accept Tegmark’s proposition that mathematical possibility implies physical existence anyway. With only one universe to observe we can’t make ANY substantial claims about the probability of any of its properties. We have no way of knowing whether physical laws could have varied at all, let alone by how much. Theological skepticism doesn’t need multiple universes to explain why this particular universe is only 99.999999999999999999999999999999999999999999999999999999% inimical to life as we know it instead of 100% because you can’t determine the odds when all you’ve seen is one result. Maybe there are multiple universes, maybe there aren’t, maybe all mathematically possible universes exist, maybe they don’t. None of these situations has positive implications for the existence of God absent evidence that God is not imaginary.
Regarding Tegmark’s argument: Here my main intention was not to prove the existence of God, but to expose the hollowness of his argument. If scientists claim that mathematics can prove the existence of multiverses, then I will also claim that mathematics has already proved the existence of a timeless, deathless being, in which case we no longer need any multiverse theory to explain the fact that our universe is life-supporting.
Regarding immortality: It may be that there is no immortal being in this universe. It may be that there is no God. But the fact still remains that science has shown that in this universe to be immortal is not an impossibility. For that, one only has to be massless, because Einstein has shown that anything having zero rest-mass will have the speed of light. So, if there is a being that is massless, then that being will be immortal. If a human being possesses a soul, and if that soul is massless, then that soul will also be immortal. Here the question is not whether a massless being exists at all. Neither is it a question of whether a human being really possesses a soul or not. The real question is: why in this universe has it been found that it is not impossible to be immortal? The real question is: why has Mother Nature kept such a provision in its scheme of things? And for whom has it kept that provision?
Now regarding cheating: This charge of cheating brought against me is baseless, as anyone going through my article carefully can find it out himself. Let me first quote what has been written in one of the responses:
“Predictions only count if, well, they are made in advance of the finding. Already knowing the findings of relativity theory and then claiming that your version of God predicts them, is, if not delusional, at least cheating.”
So, there is no doubt that I have cheated. But the person who has brought this accusation against me has forgotten that in my article I have mentioned that at least five predictions can be made from God-theory, out of which only three have so far been found to be correct. Let me repeat them once again:
a) Space and time must be relative,
b) Time must have to be unreal by some means or other,
c) Immortality must be found to be written somewhere, in some scientific theory or law or equation,
d) Volume of the entire universe must be found to be zero,
e) Everything in this universe must be ultimately reducible to one thing.
In the first three cases above he might have said that I have cheated, because, really, these are the findings of relativity theory. But if he holds that I have cheated in the other two cases also, then will he please take the trouble to give us the name(s) of the scientific theory/theories of which these are the findings? If he cannot, then he should admit that he has brought a false and baseless charge against me, for which he should apologize. Actually, by showing that these two predictions can also be made from God-theory, I have taken a very great risk. Because if they do not come true, then one day God-theory will eventually be falsified. And then there will be no hope left for us.
But still I think there will be some hopes left. At least what the Russian scientist Andrei Linde has said to Tim Folger in a completely different context raises some hopes in us. Let us first see what he has actually said:
[From: Science’s Alternative to an Intelligent Creator: the Multiverse theory, By Tim Folger, published online November 10, 2008, DISCOVER Magazine.]
So, here lies the hope. First, eliminate all the impossible theories. Then the theory that remains, even if improbable, must be the truth.
As per the scientists God does not exist, because so far there is no proof for His existence, and perhaps there will never be any. But it is also true that man believes in God. So, it is a fact that man believes in God in spite of the fact that there is no God. This fact also requires some sort of explanation. Some explanations have been offered so far by some eminent thinkers and philosophers, but none of these theories are adequate enough to explain certain aspects of that imaginary God. So it can be said that all their theories, all their hypotheses are failed theories, failed hypotheses.
If God does not exist, then God did not create man; instead, man has created God. So it is quite expected that he will create that God in such a way that He can satisfy all his needs. Man will definitely not create a God who is not merciful. He will definitely not create a God who is not immortal, because a mortal God cannot bestow immortality on others. For that purpose God Himself must have to be immortal. These points are easily understood. So we can understand why the man-created God is benevolent, merciful, all-loving, all-powerful, immortal, etc. & etc. But what about that God who is spaceless, timeless? Why was it necessary to imagine that God as such? What are the specific needs of man that can only be met by a spaceless, timeless God? If God did not have these attributes, then what would have been lost to man? A real God might have to have these attributes; there might be some philosophical justifications for that. But why should an imaginary God? Does anybody have any answer? Then what about the Hindus’ Brahma who is indifferent to man’s sorrows and sufferings? What about that Brahma who is without any qualities, without any attributes (Nirguna Brahma)? A Nirguna Brahma cannot even have benevolence, so He cannot do any single benevolent act. So what purpose does such a Brahma serve to man? Man can easily do without Him. And so, why in the first place will he take the trouble to create such a Brahma, and then declare that He doesn’t care for us? All such queries remain unanswered, unexplained. So all these theories, all these hypotheses so far offered to explain man’s belief in God are impossible theories, impossible hypotheses. So, according to Conan Doyle, they need to be eliminated mercilessly. Therefore the only theory that ultimately remains is the correct theory. The theory that simply says: Man believes in God, because there is a God.
• Rationalists are trying to rediscover the world and universe.
Their hypothesis is that God concept is unnecessary.
We must honour their sincerity.
Any anomaly that cannot be resolved by our present hypotheses, theories and experience is a sure-shot invitation to new openings, and is also a welcome endeavour.
To say that the scientific method is best is of course correct.
To believe in scientific dogma because we hate other thought processes is wrong.
The experience of flying on a non-stop flight to the USA from India is a reality nobody can deny. Travel into a timeless state is not that common. The truly religious/spiritual gurus claim to make it possible for their followers. But obviously the throughput is so low that ‘others’, including rationalists, cannot relate to it,
the way one can relate to the description of a non-stop flight to the USA.
Just remember that to a person without a visa, money and the opportunity to travel, the latter description is as hollow as the former.
• Timelessness, immortality etc. are properties of the universe under certain special conditions. It does not mean that those properties are the ones that created the universe. Your argument is built upon one fallacy over another fallacy.
I don’t have a problem if you want to call the primordial fireball before the big bang, or light itself (since it alone can travel at the velocity c, and hence be timeless and immortal), God. But does it make sense to pray to the several
constants and forces of the universe: “Dear force of gravity, heal all my sins”? How does this sound? There is certainly a difference between believing there is a zero-mass particle in the universe and believing the zero-mass particle created the universe. All of your evidence is not proof of the existence of God; it is proof of the existence of timelessness, immortality etc. in the universe, from which the existence of God does not follow.
• I hope it will not be long before people who believe in a personal god have to justify that belief to a larger proportion of the population, rather than the contrary (which is what happens now).
• There are an estimated 850,000,000 atheists and people who question the existence of a God. This figure is found at Atheist Empire, a website anyone can research, making those of this form of thinking the 4th largest group in the world.
In my own research I have found that the more educated one is, the more enlightened one becomes. And by educated I do not mean just schooling; I also mean educated about people, that is to say, knowing more of others and not being prejudiced against others before knowing them.
Ignorance is not just the not knowing of something; often it is a wilfully intended ignorance.
The debate about religion cannot be just science versus belief.
It is the wilfully ignorant versus the educated.
When I ask a person why, in the face of such knowledge and the facts of such a history of religion, they still believe, the answers vary.
Some wish to hedge their bets, meaning they are not sure, but just in case there is a God they should at least leave that idea open in their minds.
Still others fear non-conformity with their culture (they don’t want to be different).
Others will say it is because it tells people the proper way to live.
That last answer, “it tells people how to live”, is by far the most revealing of the use and meaning and purpose of religions:
a use of a belief to discriminate for purposes of social order, one that uses the concept of a belief and an intolerance of selective lifestyles to enforce it.
This use is widely evident both in past history and in the present day.
Examples would be the role and placement of women in a society; prejudice and intolerance toward selective sexual lifestyles; and punishments of women for adultery which are selective toward the female and not the male. It takes two to commit adultery, yet it is the woman who is punished?
When asking others why they do not like religions, they nearly always state how judgmental and/or how closed-minded it is.
The problem in the battle for a human sense of equality and basic equality of justice lies greatly in the acknowledgment of a person’s prejudices and personal fears.
Religions tend to be a resting place for a justified, excused intolerance.
The root core of the battle is about that excused justification, and about exposing the reality of it being a prejudice and a bigotry.
Education, not only in a school but more importantly in human interactions, will in the end win this battle, and in so doing end the wars.
The only requirement is a willingness to open the mind and to learn without a prejudice as to what you wish to learn.
Two blind men are standing next to an elephant.
The first blind man is touching along the long trunk, saying, “An elephant is like a snake.”
The other blind man is touching along the legs: “No, you’re wrong, an elephant is like a tree.”
There is none so blind as those who will not see.
• Dear Dr Wadhawan,
Being a scientist myself I find it hard to come to grips with the following: “The influx of solar energy into our ecosphere drives it away from equilibrium. Any system away from equilibrium will naturally tend to move back to equilibrium and (concomitantly) towards a state of higher entropy (as dictated by the second law of thermodynamics).”
The notion that the influx of solar energy can be treated as the equivalent of negative entropy is possibly wrong! An influx of energy/heat increases the entropy/disorder of any system. For example, you heat water (equivalent to receiving heat from the sun) and you increase its entropy. Kindly elaborate on this point (i.e., how the solar energy influx can be interpreted as ‘negative entropy’).
Thanks for the remaining brilliant points.
Just tempted to guess that you are possibly a metallurgist/materials scientist!!!
I graduated from the Dept of Metallurgy, IISc Bangalore (did my project with Professor K. T. Jacob). Being an associate editor for phase equilibria, I reckon you are possibly familiar with Prof. Jacob. Hope my academic kinship with you will attract your attention!
• Hello Randhir. You guessed it right. I am a materials scientist ALSO! I have written books on ferroic materials, smart structures, and now on complexity. My latest book is: ‘COMPLEXITY SCIENCE: Tackling the Difficult Questions We Ask about Ourselves and about Our Universe,’ Lambert Academic Publishing, Saarbrücken (2010), ISBN 9783838377544. You should actually read this book to get a detailed answer to the question you raised.
1. We are dealing with OPEN systems here.
2. For an open system the second law has to be formulated in terms of free energy (F = E – TS). The law now says that F tends to the minimum possible value. This means that there is now scope for even an INCREASE of the TS term, so long as the decrease in the enthalpy term E is larger than the increase in TS. This is what happens in crystal growth, for example. A highly ordered structure (crystal) emerges from the highly disordered fluid. The binding-energy term overpowers the decrease in entropy.
3. Entropy is defined by dS = dQ/T. The T value for the Sun is much, much higher than that for the Earth. So the entropy of the photons coming from the Sun is very low when they reach us. They undergo a number of processes which ‘degrade’ them to states of higher entropy as the system moves towards a state of equilibrium. But there is not a total loss. Some of the energy gets trapped in, say, plants, fruits, grains etc. We and other animals eat this food to stay alive, i.e. to stay away from a state of equilibrium. (A rough numerical sketch of this bookkeeping follows after this list.)
4. Equilibrium is death. The entropy is the highest at equilibrium. This means that entropy is lower when a system is away from equilibrium. The food we eat enables us to stay away from equilibrium.
5. Suppose we take the equilibrium entropy as the base value, i.e. zero. Then the nonequilibrium entropy gets a negative sign on this scale. That is the meaning of negative entropy.
6. As explained in my book, in the jargon of information theory, entropy is also equivalent to missing information. Both are measures of disorder or LACK of information. That makes negative entropy a measure of AVAILABLE information.
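To make points 2 and 3 above concrete, here is a rough Python sketch of the same bookkeeping. All numbers are illustrative round figures (the temperatures, dE and dS are assumptions for the sake of the example), not values from the article:

# Point 3: entropy carried by heat Q exchanged at temperature T (dS = dQ/T).
T_SUN = 5800.0    # effective temperature of sunlight, K (approx.)
T_EARTH = 290.0   # mean radiating temperature of the Earth, K (approx.)

def entropy_of_heat(q_joules, t_kelvin):
    return q_joules / t_kelvin

q = 1.0  # one joule, received as sunlight and later re-radiated
s_in = entropy_of_heat(q, T_SUN)
s_out = entropy_of_heat(q, T_EARTH)
print(f"entropy in : {s_in:.2e} J/K")
print(f"entropy out: {s_out:.2e} J/K")
print(f"net entropy exported per joule: {s_out - s_in:.2e} J/K")

# Point 2: F = E - T*S can decrease even when the entropy S decreases,
# provided the drop in E (binding energy released) is large enough.
T = 300.0         # K
dE = -1.0e-20     # J, binding energy released on ordering (illustrative)
dS = -2.0e-23     # J/K, entropy lost by the ordered structure (illustrative)
dF = dE - T * dS  # J
print(f"dF = {dF:.2e} J -> {'favourable' if dF < 0 else 'unfavourable'}")

The same joule leaves the Earth carrying roughly twenty times more entropy than it arrived with (T_SUN/T_EARTH is about 20), so the Earth can shed entropy and sustain local order; that net export is the sense in which sunlight acts as ‘negative entropy’.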
• Dear Sir,
Its indeed an honour to have a response from authors of your stature. Thank you.
However, I am more confused now.
1. If your point 1 is valid for the subsequent points as well, then the statement in point 4, “The entropy is the highest at equilibrium,” appears less rigorous (than saying that F is minimum) in the light of point 2.
2. If “The entropy is the highest at equilibrium” is true, then point 5 contradicts it!
3. Though the meaning of the point was obvious to me, I would prefer to have a ‘-’ sign wherever TS has appeared (2nd occurrence onward).
• On negative entropy: This comment (courtesy of commentator ‘Bonzai’, at richarddawkins.net) appealed to me!
“Actually the earth radiates the same amount of energy back to the sky as it receives from the sun, otherwise the temperature of the earth would keep on rising (forget about global warming for a sec).
What makes a difference is that the radiation from the sun has short wavelength whereas the infrared radiation that the earth reemits has long wavelength.
If you think of radiation as a stream of photons, then longer wavelength would correspond to less energetic photons. Now if the earth reemits the same amount of energy as it receives from the sun but in the form of less energetic photons, it has to emit more photons than it receives to make up for the balance.
Now entropy basically counts the number of ways you can arrange a system. For the same amount of energy a situation with more photons would correspond to a larger number of configurations, hence higher entropy.
So in summary, the earth receives energy in a low-entropy form from the sun and reradiates it in a form with high entropy. That means, on the entropy balance sheet, the earth gives out more than it receives and hence has a net loss of entropy. Thus, in this exchange the sun acts as a source of negative entropy, or order, for the earth.
Another way of saying it is that the earth converts the organized energy it gets from the sun into a less organized form, thereby increasing the total disorderliness in the sun plus earth system in accordance to the second law.”
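The photon-counting step in the quoted argument is easy to check numerically. Below is a rough Python sketch assuming single representative wavelengths (visible sunlight near 0.5 um, re-emitted infrared near 10 um) rather than the real broad spectra; the names are illustrative:

# Photons needed to carry 1 J at a given wavelength (E_photon = h*c/lambda).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photons_per_joule(wavelength_m):
    return wavelength_m / (H * C)

n_visible = photons_per_joule(0.5e-6)   # incoming sunlight, ~0.5 um
n_infrared = photons_per_joule(10e-6)   # outgoing Earth infrared, ~10 um

print(f"photons per joule, sunlight : {n_visible:.2e}")
print(f"photons per joule, infrared: {n_infrared:.2e}")
print(f"ratio: ~{n_infrared / n_visible:.0f}x more outgoing photons")

Spreading the same energy over roughly twenty times as many photons means many more possible arrangements, i.e. higher entropy for the outgoing radiation; this is the quote’s balance-sheet argument in quantitative form.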
• Why would any person continue to follow or quote selective passages from or associate themselves with an organization that believes in a God?
This of course raises the next question: why would any person not associate themselves with any belief in a God, or contradict passages from organizations that believe in a God?
Terms such as liberal and conservative are labels used by certain people to identify others.
Environmentalists, gays, women activists, adulterers, blacks, Mexicans, white people.
These are also terms chosen to identify groups.
Why, in a world of constantly increasing temperatures, littered oceans, and polluted air visible for all to see, would anyone think that the human race could cause no harm to its planet, and not even for one moment think it possible?
Why would a person consider a sexual preference that has existed since the beginning of time, and is exampled by millions upon millions in its practice, to be somehow unnatural and placed here by some evil force?
Women are just as human as men, just as able to create and build as any man. Why would a man, let alone another woman, think this sex is less able or inferior to the other and therefore must submit to and obey the other sex?
Why, in matters of sexual affairs, be they married or widowed, must a woman be punished for doing the same actions as a man?
All races of the world are equal in that they are all human.
The answer to all of this lies in the perceptions, the willfully intended ignorance, of the one perceiving it, and much of it lies solely in their fearful perceptions of themselves and their perceived place in a society.
The herd syndrome: the lamb with the bell around its neck leading the rest into the butcher's house, the pack of bulls all running because two of them jumped and ran fast.
Peer pressure to conform: an invisible yet ever-present force of roaming eyes looking over how another person walks or dresses or speaks, or whom they may be speaking to.
The other boy is skinny and looks too feminine; that girl is dressed too revealingly.
I was in a store one day, just buying a few things to make for dinner. Across the aisle, where the meat was displayed, a woman was picking out a few pieces of chicken to make a meal. The man with her looked around to see if anyone was noticing what he was getting. He noticed a group of other males who looked like football players standing a few feet away. He then proceeded to put on a small display of macho movements, rather resembling a rooster promenading in a barnyard, saying, "Chicken... uh... I am not going to eat that, get some steak!"
The woman with him rolled her eyes up then down and bought the chicken anyway.
Ever noticed how a person walks in public? Or how a male or female will act differently around another male or female in public?
Religions surround us every single day of our lives, so much so that it often goes unnoticed.
How so? It is on the currency we use ("In God we trust"); it is in subtle words we use ("God damn it," "Jesus Christ") or things like "I pray this works." One can hardly turn on any television station, be it news or comedy, without hearing some sort of religious word.
The cultural peer pressure to conform is extreme.
How this has continued to exist, in a world filled with endless research, worldwide access to that research, and human interaction well known to even the most naive of people, would be bewildering were it not for the herd syndrome of self-perception and its mate, self-esteem.
It is easy to understand how a belief system plays into peer-enforced conformity.
Prejudice is a powerful force when it is left unchecked, when it is excused and even justified to the extent of invoking a concept of the creation of all things.
To preach of a better life in death, for following a religious standard of morality, is perhaps the most dangerous form of preaching, and it has led to the murder and torture of many an innocent person.
The act of terror in the destruction of the twin towers on 9/11 is one example; another is the suicide of so many under Jim Jones.
Death is not the beginning of a better world; it is the end of one's life. What we do in life determines how we are remembered in death.
Religions have absolutely no place in public education.
Why? Public education is the place of research and factual application, without bias, in an open arena of questions.
It exists to expand the understanding of math, science, and social studies, and to open a creative mind.
Religions have a very long history of attempts to close the mind to questions, to conform a society.
The seemingly endless debates, from evolution to intelligent design to the more recent Texas schoolbook debate, attempt to rewrite the historical record and center it on Christian, and only Christian, reasoning.
That religion is based upon the use of excused prejudice, with the intent to discriminate for social conformity, is a track record impossible not to notice.
It would never have existed were this not true, both by its present and past track record and equally by those preconceived concepts people form of others.
From the child in school being bullied for looking different, to the woman being stoned to death for adultery, to how people act in a crowd or treat one another in public.
Yes, the science of knowledge is the science of reason; the evidence of evolution is visible. Yet the enemy is not in the knowledge of learning this; it is in the ignorant refusal to see the forest for the trees.
The single enemy has remained the same. Its name is bigotry.
Trial of Beliefs is a book about personal beliefs and how they are used.
If a person chooses to have some sort of belief in a God, that is their personal view. It must never interfere with the quest for knowledge, and must never be imposed upon another or used to excuse an injustice.
It must be open to question, for to question is to learn, and to learn is to understand the humanity within humanity; it must never get in the way of humane humanity.
This belief some have may never really disappear from the face of the earth, but its evil can be exposed and removed.
The beginning of its end may have started in the Age of Enlightenment, when humanism and the equality it implies began to be considered.
It is estimated that over 850,000,000 people consider themselves either atheists, not members of any organized religion, or at least questioning of religion (source: Atheist Empire).
That makes this the 4th largest group on the planet.
Be mindful that to win this war means always standing upon the high ground and never allowing a prejudice to guide you; otherwise one becomes just what the enemy has been.
• It is entirely possible that, if there is a God creator, then the only language this being is able to speak is the creation itself.
All else is a personal human understanding of it; the prejudiced and the judgmental are those who are unable to understand the language of a creator.
The lesson may well be that the differences found in this creation are the reason and purpose of the mind: to learn of it all and to know one's place within it.
If one does not have a God belief, that lesson is the same.
The world has been changing for those very reasons; the more we learn of others, the more we find we share than we do not share.
This embraces all of humanity: its spirit, its mind, its research of discovery, its science, and it holds no prejudice to conform all of it to a single way of existence.
It holds the human race to recognize its faults and to learn from them.
It holds it accountable to learn and to reason out the purpose of a creation, or of an evolution.
It embraces the spirit and the mind, the belief in a God and no belief in a God, and sees its spirit in both views.
Evolution is a learning to evolve; a creation is a building of something. Nothing can be built if there is no purpose in building it; nothing can evolve if it cannot fit into it all.
A bird does not jump from the egg into flight; it must learn to fly.
A human does not jump from birth to walking and speaking; it must learn to walk, and the language must be learned.
All the differences are not the same and can never be the same; all of existence shows this.
So many still stand in a valley surrounded by trees and yell how they can not see the forest.
Religions confuse this, telling of one thing and then the other; they allow a prejudice of opinion while not seeing the reality of it all.
Religion stands in the valley covered by trees and yells that it cannot see the forest, while telling all that the trees must be removed so the forest can be seen.
A belief in a God, or no belief in a God, must learn that the forest is not hidden; it surrounds us.
The spirit cannot be confused with the mind; the two rest in the same place.
A dog or a cat has many breeds; the dog and the cat are still a dog and a cat. There are many different birds, yet they are still birds.
Humankind has many breeds, yet they are all still human.
We all share in being human. Understanding its many breeds does not separate us from all being human.
It is the connection of all being human that removes the prejudice among its many breeds.
It is fine if one connects the dots to the total picture in a God belief; it is fine if the dots fit together in a non-God belief. The dots still fit. The many breeds are still a part of the same.
A religion or belief, be it a God-based one or not, is in the final examination or observation the character of the person.
The immorality is believing your personal morality must live in all others. Proper morality is in practicing it, not preaching it. Outlawing or beating another because they do not fit into this personal morality is not only unethical, it is a bigotry of intolerance.
Tolerance does not mean to accept everything; it only means to tolerate that which is self-evident in the lifestyles of others and harms no one.
When someone tells another that they love or like them just as they are, it is a powerful message that cuts deep into the soul of humanity.
It cuts in so very deep because it is just what humanity has always wanted and always needed.
It is a message of such power it cannot be overwhelmed or stopped.
When another hates someone whom they have never met and have only just now seen, it is another message, one that cuts deep against the soul of humanity.
When someone holds out a hand to help another whom they do not know, who is in deep pain or hungry or oppressed, asking nothing in return, it speaks a powerful message of the heart of humanity.
That has never once required a God or no God; its only requirement is to do so.
When one of us is oppressed we all are.
When one of us goes hungry we all go hungry.
History, when looked at for its motives, intent, and reasons, is different than just events and dates.
Motives are human intentions; the results are the events which follow, all being human nature.
A building is built for a reason; people live in a city for a reason, a war is fought for a reason, language exists for a reason.
Organizing all of this has a reason, and laws have a reason. All of this, having been built by humankind, has its reasons in humankind.
When early humankind began to form into larger and larger villages, then cities, there arose a need to organize for self-protection from the things they knew people are capable of doing to each other.
Early laws centered on the population judging itself and the ruling class enforcing this to remain in power.
Those early laws centered on commonly held beliefs, religions of various types. This too is exampled in human history: do this so the crops will grow or the rain may fall; a person must do this and live this way, or this will happen.
Early religions were the law by which the public judged itself.
This too is evident in the historical record, parts of which still exist in modern law.
This is what those selective 'abominations' religions still carry in them were used for back then.
The enforcement of them fell to the people themselves, and the justifying reasoning was that this or that might happen because a God might do this or that.
Yet the motive behind the reasoning is also a social bigotry, an intolerance of people or lifestyles.
This too is exampled in the evolution of modern laws, and in the growing knowledge of other peoples.
The more we know, the more we know we are more the same than we are not the same.
What we know as an unquestionable reality, which has exampled itself about the human race, is that it factually has certain actions and lifestyles of which some are completely intolerant, about which some are prejudiced and discriminating.
This is witnessed in daily life and throughout its entire existence.
A religion cannot exempt itself from this reality by saying a god desires this or that way for all people to live.
A human minister cannot claim not to be human by exempting himself or herself, selecting parts of humanity as an abomination, and then dismissing this as some sort of Godly plan.
It cannot put such things into writing, then claim it never put anything into writing and that the writing is God's plan.
It cannot claim to have no prejudice, then write or explain how anything not living this way is wrong.
It cannot describe what is in reality and claim itself not to be a part of that reality.
That is, by example, extreme evidence of the reality of a human race that does have certain intolerances.
This could not be a pure God belief; it is the pure use of such a belief to conform all to one way of existence while denying its intolerances of that existence.
The movement away from many religions is exactly because of their refusal to understand humankind for what humankind really is.
Religions have only themselves to blame, and the blame is not upon those who question; the blame is upon those who refuse to question.
An example which is sometimes pointed out: when you run your car into a tree, it is not the tree's fault for having grown in the wrong place.
The only abominations we do to each other are the ones we created. They remain because we have yet to realize the truth of ourselves and have not fully learned to stop doing it.
When the human race finally realizes its own existence, the heaven it has always been living on, called Earth, will finally stop being the hell it created upon it.
Organized religions are, by their own example, the 'use of' a belief for the purpose of conforming all to it.
It is not about the belief itself.
How could a simple single-celled organism produce, over millions of years, a complex animal like man (and several other complex organisms in between and along with man)? Have you thought of that? Don't quote Darwinism as an answer. Darwin described just the mechanism, or the conditions under which evolution operates, and not the real cause that stimulates it. Even genetics is not an answer. It also tells us how evolution operates and not what operates it!
Again, what is the definition of life? Certainly not what we study in biology. What we study in biology as the definition of life is nothing but the properties of life… then what is life? Have you ever thought of that?
• Satish Chandra
Please read the Complexity Series by Dr. Wadhawan. It will answer your questions.
• Evolution by natural selection is a self-organizing phenomenon. In such a phenomenon there are variations and a feedback loop, like a video camera filming its own image. A slight variation in the initial image can give rise to beautiful patterns which almost look like they are designed. Try this yourself if you have a webcam. It's amazing! Likewise, the feedback loop in evolution arises via natural selection, while the variations are the random mutations. Any self-organizing phenomenon can give rise to complexity.
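That variation-plus-feedback loop can be sketched in a few lines of Python. This is only a toy, a cumulative-selection illustration with an arbitrary target string standing in for the fitness criterion, not a model of real biology:

import random

TARGET = "COMPLEXITY"    # arbitrary stand-in for 'what selection happens to favour'
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    # the feedback signal: how well a variant matches the selective criterion
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # blind variation: each character has a small chance of a random change
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(10000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    population = [mutate(best) for _ in range(100)]  # selection closes the loop
print(generation, best)  # typically reaches the target within a few hundred generations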
• Vinod Wadhawan
Anything simple or complex cannot CREATE something more complex than itself. But anything simple or complex can EVOLVE into something more complex than itself. This is DYNAMICAL evolution of any OPEN system, and not just Darwinian evolution. The ultimate cause of all this evolution is that our universe is expanding, and thus creating gradients of all sorts. Nature abhors gradients (second law of thermodynamics). Any gradient is equivalent to negative entropy. And negative entropy means information. As information builds up, complexity increases. As mentioned by Satish Chandra, my series on complexity explains all this in detail.
• The battle over evolution and creation is just the icing on the cake of a much larger question.
That question is not whether a god exists or not. The question lies in the why and the motive.
It is what we do with this belief, the god belief.
Of course, if this being created everything, then the brain is not something to shut off in order to simply obey this unseen being.
There would be no purpose in buildings or inventions, or even a religious book, since to read it would also require a mind's ability to understand it.
The question is the motive in this belief, those so-called judgments against others. We know what this is; we know this could not in any way be an order from a god, since this being, at least assuming it created everything, must know of those human emotions of prejudice, ignorance, bigotry, greed, jealousy, and envy.
Any intelligent thinking being would know the results of such a selective view some may hold against others not like themselves.
The rule that no questions shall be asked that might show something wrong in this belief is evidence of that often-seen submission to social peer pressure to conform: any questioning that might show this belief to be wrong must of course be instantly attacked, otherwise the rest of this belief might fall into question.
Someone might no longer have this belief as a justified reason to hold, or to excuse, a view of others if this idea of a god were shown to be wrong.
Religion is overly littered with those selective ideas of how the whole of the human race must behave; this is far more evidence of the nature of the species known as humankind than it could ever be about a god.
• 2 facts:
the Big Bang theory
Charles Darwin's theory of evolution
(versus many mythological religious stories)
• Scientists also discovered galaxies older than the Big Bang.
So much for one scientific fact.
• @RKK
Could you provide a source of the above reference, preferably a peer-reviewed publication or formal announcement?
• Massive Ancient Galaxy Stirs Mystery: Is the Universe Older than We Think?
The Daily Galaxy (of the Discovery Channel) reported it AND HAS GIVEN A FULL ASTROPHYSICAL JOURNAL LETTERS reference, which is fully peer-reviewed. One can google it easily.
'What our observations show is that alongside these compact galaxies were other ellipticals that were anything up to 100 times less dense and between two and five times larger – essentially "fully grown" – and much more like the ellipticals we see in the local Universe around us,' explains Michele Cappellari of Oxford University's Department of Physics, an author of a report of the research in The Astrophysical Journal Letters.
• Atheism, and/or a questioning of the existence of a God, results from perhaps these main reasons, in no particular order.
1. Its denial of certain realities of nature and of people. Examples being sexuality, the relationship of evolution to species, and its explanations of natural events as godly created and/or godly punishments against others, such as a hurricane being a judgment.
2. Its refusal of any questioning of it which might in the least way disagree with its doctrine and dogma.
3. Its narrow-mindedness about lifestyles that are neither uncommon nor harmful to anyone, other than being something about people it cannot tolerate.
4. Its intolerance of any other type of belief system which does not fully agree with it.
5. Hypocrisy: its teaching of a love of all people while at the same time exampling an intolerance of anything outside what it narrowly defines as love between people.
Constant interference in education, through attempts to impose itself upon the educational systems of an entire nation.
Its known examples of sexism in its definitions of relationships.
When this constant interference with people and/or their ways of life is added up, and then compared to the known examples of prejudice, ignorance, intolerance, and bigotry, religion takes on a new, if not well-known, meaning.
Its entire message throughout its history has been one of social conformity, based upon its desire to conform others along those lines, with a god belief used as excuse and justification.
No one really cares whether a person believes in this god or not, atheist or anyone else; the central issue has always been about those examples.
Those examples, ignorance, prejudice, intolerance, and bigotry, are extremely well known to exist in the human race, and no excuse can be given to explain them away.
Those are factual realities of humankind; the central problem has always been about those reasons and examples.
Atheism is not just 'I do not believe in any gods'; equally, a questioning of the existence of a god has always been about a desire for reason.
Atheism is an acknowledgment of the reality of reality, which is not solely about science versus faith in a god.
It is also an acknowledgment of the realities of prejudice, ignorance, and intolerance.
Atheism removes the excuses for them and acknowledges the reality of the human race's ability to overcome them by learning of reality. We are all human; we are all a part of it all. The only abominations are, and always have been, the ones we created.
The mind is not a hat rack; it is the seat of learning. To overcome a problem, the problem must first be recognized and not excused.
Atheism is reason over excuses: knowledge over ignorance, hope over despair, prejudice removed by learning, looking before leaping, reasoning over submission.
It does not care one bit whether someone believes in a god, and cares a great deal about that belief being forced upon others and about the desire of the extremes of religion to deny a freedom of learning.
If there is any question about a religion being about social conformity, along the lines of a discriminating view of people, it is exampled by its constant default to its moral views of how everyone on the planet must be living, and its excused justification for doing so.
Its endless preaching of the end of the world, so that only a select few may live afterwards,
in other words the end of all those other types.
How many times have we all seen it going in this direction?
A god belief is fine if it helps someone in some sort of way; it is completely false, completely unethical, completely immoral when it forces a narrow view upon others.
This is based upon its well-known history and exampled by those who use it this way.
This is its greatest fault, and this is the main reason so many, in growing worldwide numbers, question its motives.
Your first question, as to there being no loving god, may have been better asked as why there are so many unloving humans.
You're right: there could not be a god behind this, or creating this, or having created it. Humankind created it all. We just happen to be on a planet that evolved a place for it to exist on.
Until we realize the reality of what we have done to each other, the excuses will continue to exist.
• You Say:
I say :
That the universe does not exist. Prove with science that the universe exists. If proved, also prove to me where it exists.
You Say:
How did the universe get created?
I say:
The universe is not created, as the universe itself does not exist.
You Say:
I say:
Define “my life”? Do viruses have life?
You Say:
I Say:
All of us are nature. Why do you look at nature separately from yourself? Which nature do you look at?
Your statement:
I Say:
Service to self is better than serving others. Why do you serve others? If everyone serves themselves by trying to be good to themselves, then society will automatically be changed.
Your Satement:
My Statement:
I agree with this partially. But what are the laws of the universe?
• Vinod Wadhawan
‘ I say :
That the universe does not exist.’
By this logic, the writer of the comments does not exist. Then to whom do I respond?
• Dear Vinod Wadhawan,
You can respond based on your logic.
“The universe has a huge amount of information content, or complexity.”
Prove that the universe exists.
• Prof. Wadhawan has already answered all of your questions. They are the sort of pointless intellectual exercises that pass as “deep” and “profound” in spirituality. Science assumes a few things and then proceeds from there. In fact any knowledge system is based on assumptions. Science makes them known upfront. But mystical/spiritual systems pretend as if they have figured out everything.
So you can keep pretending as if you have asked something profound and earth-shattering that needs answering. But we here acknowledge the fact that you can stub your toe on a stone, then go "that's interesting, I wonder how that works?"
• I agree with you completely, Sir, that science assumes a few things and then proceeds from there. But the validity of those assumptions may or may not turn out to be true, and with further knowledge, what was once considered true may melt into falsehood. We have seen this time and time again in the sphere of science.
Then, you go on to state "In fact, any knowledge system is based on assumptions…". I know at least one system that does NOT require any assumption. That system asks you: "Do you ever doubt your own existence?"
I hope you will give some thought to this question. Thanks.
• Satish Chandra
That is why I said other systems pretend that they aren’t based on any assumptions. At the minimum, they need to assume that what holds good at time t will hold good at t + 1. So coming to your system, why did you assume that you can finish off the entire question? Everything might go poof the moment you uttered the first word “Do”. What good is a question that can’t even be asked?
• Later findings may result in what was an accepted theory being augmented, modified, or rendered applicable only to certain special cases, but they do not exactly result in it melting into falsehood. Here are some excerpts from Isaac Asimov's 'Relativity of Wrong' essay.
Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn’t. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That’s why the theory lasted so long…..
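The 'nearly 0 per mile' is easy to check: over a horizontal distance d, the surface drops below a flat tangent by roughly d²/2R. A quick sketch, with R the mean Earth radius in miles:

import math

R = 3959.0                          # mean Earth radius, miles
d = 1.0                             # one mile along the tangent
drop = R - math.sqrt(R**2 - d**2)   # exact drop; approximately d**2 / (2*R)
print(f"{drop:.6f} miles = {drop * 63360:.1f} inches per mile")  # ~8 inches

So the flat-earth picture is off by only about eight inches in every mile, which is why it served for so long.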
• Dear Mr. Satish Chandra
I may have assumed that I would finish off the entire question. But there is no assumption in the entity whose presence is essential to (1) recognize that everything has gone poof, and (2) recognize whether the question was completed or not. Otherwise questions 1 and 2 would not arise at all. This calls for a real understanding of "I".
You asked: what good is a question that can't even be asked (after everything has gone poof)? The very fact that you questioned is proof that you are affirming the presence of an entity in whose presence alone the following recognitions can take place: (A) everything has gone poof, and (B) the question was not completed.
By the way, don't you think that your question would have been better posed as "What good is a question, if the questioner himself is gone"? Are you concerned more about the question than the questioner? If one is more concerned about the question rather than the questioner, I am afraid one is barking up the wrong tree.
If you wish to proceed, please extend to me the courtesy of an answer to my question (without killing the spirit of my question): do you ever doubt your own existence?
• Satish Chandra
I don’t doubt my own existence. That’s a trivial thing to agree to and lies at the base of any knowledge system because knowledge itself becomes meaningless without human minds. But that is still beholden to the assumption that I (and we) will continue to exist. There’s no escaping that assumption.
And my question is best posed as "What good is anything, if everything goes poof?" and not the version you gave (which no doubt is convenient to your purpose). I'm wary of semantic stopsigns that serve no purpose other than giving an illusion of having resolved something without actually doing anything of use (which is what Vedanta is and where your purpose lies).
• The entire battle of science versus religion is a false battle, in denial of its own falseness.
It misses the target by trying to separate the god idea from the real target, which is the unethical immorality in the use of this god idea to disguise intolerance of others, and the fear of exposing that intolerance as the motive in beliefs that selectively judge the realities of humankind.
You will never, at least in this time, remove someone's hope of something better than humankind. But you can point out that this is exactly what a religious excuse to judge is: a hope of something better that cannot tolerate what is here now, and that refuses to see its own bigotries about what does exist.
You can point out that this hope of something better arises because someone sees only the bad and not the good, and refuses to acknowledge that humankind can overcome its bad once it removes the excuses and false reasoning of those who wish to use the bad parts to oppress the good parts that disagree with their view of it all being bad.
The target is bigotry, injustice, selective racial and personal prejudice. It is seeing reality for what it is, and acknowledging the ability of the human race to overcome it by exposing those false excuses, offered by beliefs that have kept the human race in fear and denial of itself.
• Mayank Agrawal
Can you write an article (if you have already written one, please refer to it) specifically explaining EVOLUTION in simpler terms? Since it is very non-intuitive, it is hard to imagine the evolution of single cells into complex organisms; hence your insight might be of great value.
• Dear Dr. Wadhawan,
Nice to see you actively pursuing science of evolution.
It is true that scientists carefully make hypotheses, investigate them, reject them if they are not found to explain the observations made objectively (preferably by instruments), and then give a theory.
Also, they are happy publishing in scientific journals and do no extra publicity.
The fact that the laws so discovered are applied to the remotest past (albeit observed and converted to an understandable form) is also to be acknowledged.
We are, till today, 'alone' in this universe.
It is understood that conditions similar to ours 'can' exist in the millions of solar systems in the observable universe. Understood by induction, not by direct observation, which is prohibited by the Einstein limit on physical speed. In fact, we observe old light (EM waves) and apply scientific laws to those observations.
Simple people should keep this always in mind.
dc54adc90ee9e37a | Kohn–Sham equations
In physics and quantum chemistry, specifically density functional theory, the Kohn–Sham equation is the Schrödinger equation of a fictitious system (the "Kohn–Sham system") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles.[1][2] The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as vs(r) or veff(r), called the Kohn–Sham potential. As the particles in the Kohn–Sham system are non-interacting fermions, the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest energy solutions to
\left(-\frac{\hbar^2}{2m}\nabla^2+v_{\rm eff}(\mathbf r)\right)\phi_{i}(\mathbf r)=\varepsilon_{i}\phi_{i}(\mathbf r)
This eigenvalue equation is the typical representation of the Kohn–Sham equations. Here, εi is the orbital energy of the corresponding Kohn–Sham orbital, φi, and the density for an N-particle system is
\rho(\mathbf r)=\sum_i^N |\phi_{i}(\mathbf r)|^2.
The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham (沈呂九), who introduced the concept at the University of California, San Diego in 1965.
Kohn–Sham potential
In density functional theory, the total energy of a system is expressed as a functional of the charge density as
E[\rho] = T_s[\rho] + \int d\mathbf r\ v_{\rm ext}(\mathbf r)\rho(\mathbf r) + V_{H}[\rho] + E_{\rm xc}[\rho]
where Ts is the Kohn–Sham kinetic energy which is expressed in terms of the Kohn–Sham orbitals as
T_s[\rho]=\sum_{i=1}^N\int d\mathbf r\ \phi_i^*(\mathbf r)\left(-\frac{\hbar^2}{2m}\nabla^2\right)\phi_i(\mathbf r),
vext is the external potential acting on the interacting system (at minimum, for a molecular system, the electron-nuclei interaction), VH is the Hartree (or Coulomb) energy,
V_{H}={e^2\over2}\int d\mathbf r\int d\mathbf{r}'\ {\rho(\mathbf r)\rho(\mathbf r')\over|\mathbf r-\mathbf r'|}.
and Exc is the exchange-correlation energy. The Kohn–Sham equations are found by varying the total energy expression with respect to a set of orbitals to yield the Kohn–Sham potential as
v_{\rm eff}(\mathbf r) = v_{\rm ext}(\mathbf{r}) + e^2\int {\rho(\mathbf{r}')\over|\mathbf r-\mathbf r'|}d\mathbf{r}' + {\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)}.
where the last term
v_{\rm xc}(\mathbf r)\equiv{\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)}
is the exchange-correlation potential. This term, and the corresponding energy expression, are the only unknowns in the Kohn–Sham approach to density functional theory. An approximation that does not vary the orbitals is Harris functional theory.
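The self-consistency implied above can be made concrete: given an approximation for Exc, one iterates density → potential → orbitals → density until nothing changes. Below is a minimal 1D sketch in Python; everything in it is an invented toy (a harmonic vext, a softened Coulomb kernel so the 1D Hartree term stays finite, the 3D LDA exchange formula applied to a 1D density, and correlation neglected entirely), not a production DFT scheme.

import numpy as np

n_grid, box = 201, 10.0
x = np.linspace(-box/2, box/2, n_grid)
dx = x[1] - x[0]
n_elec = 2                            # two electrons -> one doubly occupied orbital

# Kinetic operator: -(1/2) d^2/dx^2 by second-order finite differences (atomic units)
T = -0.5 * (np.diag(np.ones(n_grid-1), 1) - 2*np.eye(n_grid)
            + np.diag(np.ones(n_grid-1), -1)) / dx**2
v_ext = 0.5 * x**2                                           # toy external potential
coulomb = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # softened 1/|x-x'| kernel

rho = np.full(n_grid, n_elec / box)                   # initial guess: uniform density
for it in range(200):
    v_H = coulomb @ rho * dx                          # Hartree potential
    v_x = -(3.0 * rho / np.pi)**(1.0/3.0)             # LDA exchange (3D formula, as a toy)
    eps, phi = np.linalg.eigh(T + np.diag(v_ext + v_H + v_x))
    phi /= np.sqrt(dx)                                # grid-normalize the orbitals
    rho_new = 2.0 * phi[:, 0]**2                      # fill the lowest orbital twice
    if np.max(np.abs(rho_new - rho)) < 1e-8:          # self-consistency reached?
        break
    rho = 0.5 * rho + 0.5 * rho_new                   # linear mixing for stability

print(f"converged after {it} iterations; lowest KS eigenvalue = {eps[0]:.4f}")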
The Kohn–Sham orbital energies εi, in general, have little physical meaning (see Koopmans' theorem). The sum of the orbital energies is related to the total energy as
E = \sum_{i}^N \varepsilon_i - V_{H}[\rho] + E_{\rm xc}[\rho] - \int {\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)} \rho(\mathbf{r}) d\mathbf{r}
Because the orbital energies are non-unique in the more general restricted open-shell case, this equation only holds true for specific choices of orbital energies (see Koopmans' theorem).
1. ^ Kohn, Walter; Sham, Lu Jeu (1965). "Self-Consistent Equations Including Exchange and Correlation Effects". Physical Review 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.
2. ^ Parr, Robert G.; Yang, Weitao (1994). Density-Functional Theory of Atoms and Molecules. Oxford University Press. ISBN 978-0-19-509276-9.
944b2484c11e2088 | Electronic band structure
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes those ranges of energy that an electron within the solid may have (called energy bands, allowed bands, or simply bands), and ranges of energy that it may not have (called band gaps or forbidden bands). Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.).
Why bands and band gaps occur
Animation for the formation of bands, and how they are filled with electrons in a metal and an insulator.
The existence of continuous bands of allowed energies can be understood starting with the atomic scale. The electrons of a single isolated atom occupy atomic orbitals, which form a discrete set of energy levels. If multiple atoms are brought together into a molecule, their atomic orbitals will combine to form molecular orbitals, each with a different energy. In other words, n atomic orbitals will combine to form n molecular orbitals. As more and more atoms are brought together, the molecular orbitals extend over larger and larger regions, and the energy levels of the molecule become increasingly dense. Eventually, the collection of atoms forms a giant molecule, in other words, a solid. For this giant molecule, the energy levels are so close that they can be considered to form a continuum. (The fineness of spacing required to be considered an effective "continuum" depends on the situation.)
Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve larger and larger orbitals with more overlap, becoming progressively wider and wider at high energy so that there are no band gaps at high energy.
Basic concepts
Assumptions and limits of band structure theory
To start out, it is important to note what has been assumed in order to gain the simplicity of the band theory:
1. Infinite-size system: For the bands to be continuous, we must consider a large piece of material. The concept of band structure can be extended to systems which are only "large" along reduced dimensions, such as two-dimensional electron systems.
2. Homogeneous system: The notion of a band structure as an intrinsic property of a material assumes that the material is homogeneous in some way. Practically, this means that band structure describes the bulk inside a uniform piece of material.
3. Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc.
The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory:
• Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending).
• Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending).
• Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics.
• Strongly correlated materials: Some materials (superconductors, Mott insulators, and more) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physics.
Crystalline symmetry and wavevectors
Brillouin zone of a face-centered cubic lattice showing labels for special symmetry points.
Band structure plot for Si, Ge, GaAs and InAs generated with tight binding model. Note that Si and Ge are indirect band gap materials, while GaAs and InAs are direct.
Main articles: Bloch wave and Brillouin zone
Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch waves as solutions:
\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}} u_{n}(\mathbf{r}),
where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by n, the band index, which simply numbers the energy bands. Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function En(k), which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector space that is related to the crystal's lattice. Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone. Special high symmetry points in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ.
It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, E vs. kx, ky, kz. In scientific literature it is common to see band structure plots which show the values of En(k) for values of k along straight lines connecting symmetry points. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.
Energy band gaps can be classified using the wavevectors of the states surrounding the band gap:
• Direct band gap: the lowest-energy state above the band gap has the same k as the highest-energy state beneath the band gap.
• Indirect band gap: the closest states above and beneath the band gap do not have the same k value.
Asymmetry: Band structures in non-crystalline solids
Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band structures.[citation needed] These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
Density of states
Main article: Density of states
The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E.
The density of states function is important for calculations of effects based on band theory. It appears in calculations for optical absorption where it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering.
For energies inside a band gap, g(E) = 0.
Filling of bands
At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:
f(E) = \frac{1}{1 + e^{{(E-\mu)}/{k_{\rm B} T}}}
where:
• kBT is the product of Boltzmann's constant and temperature, and
• µ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted EF). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:
N/V = \int_{-\infty}^{\infty} g(E) f(E)\, dE
Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands. The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral. The condition of charge neutrality means that N/V must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting g(E)), until it is at the correct equilibrium with respect to the Fermi level.
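That adjustment can be sketched numerically. Assuming a toy free-electron-like density of states g(E) ∝ √E (all values below are invented for illustration), pick the density of a 'neutral' solid and bisect on µ until the integral matches it:

import numpy as np

kT = 0.025                          # ~room temperature, in eV (assumed)
E = np.linspace(0.0, 10.0, 5000)    # energy grid, eV
dE = E[1] - E[0]
g = np.sqrt(E)                      # toy density of states, arbitrary units

def density(mu):
    # Fermi-Dirac occupation, written via tanh to avoid exponential overflow
    f = 0.5 * (1.0 - np.tanh((E - mu) / (2.0 * kT)))
    return np.sum(g * f) * dE       # integral of g(E) f(E) dE

target = density(3.0)               # pretend charge neutrality requires mu = 3 eV
lo, hi = E[0], E[-1]                # density(mu) is monotonic in mu, so bisect
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if density(mid) < target else (lo, mid)
print(f"recovered Fermi level: {0.5*(lo+hi):.4f} eV")  # ~3.0000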
Names of bands near the Fermi level (conduction band, valence band)
A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.[1] Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.[2] Likewise, materials have several band gaps throughout their band structure.
The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level. The bands and band gaps near the Fermi level are given special names, depending on the material:
• In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in many semiconductors the valence band is built out of the valence orbitals.
• In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals.[3] The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level.
Theory of band structures in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch waves as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors (b1,b2,b3). Now, any periodic potential V(r) which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:
V(\mathbf{r}) = \sum_{\mathbf{K}}{V_{\mathbf{K}}e^{i \mathbf{K}\cdot\mathbf{r}}}
where K = m1b1 + m2b2 + m3b3 for any set of integers (m1,m2,m3).
From this theory, an attempt can be made to predict the band structure of a particular material, however most ab initio methods for electronic structure calculations fail to predict the observed band gap.
Nearly free electron approximation
In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's Theorem which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by the Bloch wavefunction:
{\Psi}_{n,\mathbf{k}} (\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}} u_n(\mathbf{r})
where the function u_n(\mathbf{r}) is periodic over the crystal lattice, that is,
u_n(\mathbf{r}) = u_n(\mathbf{r-R}) .
Here index n refers to the n-th energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.[4]
The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like Aluminum even gets close to the empty lattice approximation.
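A minimal numerical illustration of this picture, for an assumed 1D toy potential V(x) = 2V0 cos(2πx/a), whose only nonzero Fourier components are V±b = V0: diagonalizing the resulting plane-wave ('central equation') matrix shows a gap of about 2V0 opening at the zone boundary.

import numpy as np

a, V0 = 1.0, 0.2                  # lattice constant and potential strength (toy values)
b = 2 * np.pi / a                 # reciprocal lattice vector
G = b * np.arange(-3, 4)          # small basis of 7 plane waves e^{i(k+G)x}
ks = np.linspace(-b/2, b/2, 101)  # wavevectors across the first Brillouin zone

bands = []
for k in ks:
    H = np.diag(0.5 * (k + G)**2)                 # kinetic term, atomic units
    for i in range(len(G)):
        for j in range(len(G)):
            if abs(abs(G[i] - G[j]) - b) < 1e-9:  # V_{G-G'} = V0 couples neighbours
                H[i, j] = V0
    bands.append(np.linalg.eigvalsh(H))
bands = np.array(bands)                           # shape (101, 7): lowest bands vs k

gap = bands[0, 1] - bands[0, 0]                   # at the zone edge k = -pi/a
print(f"gap at zone boundary: {gap:.3f} (first-order estimate 2*V0 = {2*V0})")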
Tight binding model
Main article: Tight binding
The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation \Psi is well approximated by a linear combination of atomic orbitals \psi_n(\mathbf{r}).[5]
\Psi(\mathbf{r}) = \sum_{n,\mathbf{R}} b_{n,\mathbf{R}} \psi_n(\mathbf{r-R}),
where the coefficients b_{n,\mathbf{R}} are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:[6][7]
a_n(\mathbf{r-R}) = \frac{V_{C}}{(2\pi)^{3}} \int_{BZ} d\mathbf{k} e^{-i\mathbf{k}\cdot(\mathbf{R-r})}u_{n\mathbf{k}};
in which u_{n\mathbf{k}} is the periodic part of the Bloch wave and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the n-th energy band as:
\Psi_{n,\mathbf{k}} (\mathbf{r}) = \sum_{\mathbf{R}} e^{-i\mathbf{k}\cdot(\mathbf{R-r})}a_n(\mathbf{r-R}).
The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations,[8] sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations.
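The simplest concrete case, a 1D chain with one orbital per site, on-site energy eps0 and nearest-neighbour hopping t (toy values below), already shows discrete levels broadening into the band E(k) = eps0 - 2t cos(ka):

import numpy as np

N, eps0, t, a = 50, 0.0, 1.0, 1.0   # sites, on-site energy, hopping, spacing (toy values)
H = eps0 * np.eye(N) - t * (np.diag(np.ones(N-1), 1) + np.diag(np.ones(N-1), -1))
H[0, -1] = H[-1, 0] = -t            # periodic (ring) boundary condition

E = np.sort(np.linalg.eigvalsh(H))
k = 2 * np.pi * np.arange(N) / (N * a)             # allowed ring wavevectors
E_analytic = np.sort(eps0 - 2 * t * np.cos(k * a))
print(np.allclose(E, E_analytic))   # True: a band of width 4t centred on eps0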
KKR model
The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
A variational implementation was suggested by Korringa and by Kohn and Rostoker, and is often referred to as the KKR model.[9][10]
Density-functional theory
In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experiment results. In particular, DFT seems to systematically underestimate by about 30-40% the band gap in insulators and semiconductors.
It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem.[11] In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure, although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials.[12]
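For reference, a generic one-parameter hybrid mixes a fraction a of exact exchange into a semilocal functional (in the PBE0 construction, for example, a = 1/4):

E_{\rm xc}^{\rm hyb} = a\,E_{\rm x}^{\rm exact} + (1-a)\,E_{\rm x}^{\rm DFT} + E_{\rm c}^{\rm DFT}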
Green's function methods and the ab initio GW approximation
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.
Mott insulators
Main article: Mott insulator
Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean field theory, which bridges the gap between the nearly free electron approximation and the atomic limit.
Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following:
• Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice.
• k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment.
• The Kronig–Penney model, a one-dimensional rectangular-well model useful for illustrating band formation. Though simple, it predicts many important phenomena, but is not quantitative (see the numerical sketch after this list).
• Hubbard model
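As promised in the Kronig–Penney item above, here is a minimal numerical sketch (not taken from any of the cited references; the units ħ = 2m = a = 1 and the barrier strength P are arbitrary illustrative choices). In the Dirac-comb limit, the model's well-known dispersion condition confines allowed energies to bands where |cos(αa) + P·sin(αa)/(αa)| ≤ 1, with α = √E:

    import numpy as np

    a, P = 1.0, 3 * np.pi / 2                   # lattice constant, barrier strength

    def band_function(E):
        alpha = np.sqrt(E)
        return np.cos(alpha * a) + P * np.sin(alpha * a) / (alpha * a)

    E = np.linspace(1e-9, 120.0, 200001)        # dense energy grid
    allowed = np.abs(band_function(E)) <= 1.0   # True inside an allowed band

    # Band edges sit where 'allowed' flips; pair them up as (bottom, top)
    edges = E[:-1][np.diff(allowed.astype(int)) != 0]
    for lo, hi in zip(edges[0::2], edges[1::2]):
        print(f"allowed band: E = {lo:8.3f} .. {hi:8.3f}")

The gaps between successive printed intervals are the forbidden energies, i.e. the band gaps.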
The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces.
Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).
Band diagrams
To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted, the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.
1. ^ High-energy bands are important for electron diffraction physics, where the electrons can be injected into a material at high energies, see Stern, R.; Perry, J.; Boudreaux, D. (1969). "Low-Energy Electron-Diffraction Dispersion Surfaces and Band Structure in Three-Dimensional Mixed Laue and Bragg Reflections". Reviews of Modern Physics 41 (2): 275. Bibcode:1969RvMP...41..275S. doi:10.1103/RevModPhys.41.275.
2. ^ Low-energy bands are however important in the Auger effect.
3. ^ In copper, for example, the effective mass is a tensor and also changes sign depending on the wave vector, as can be seen in the de Haas–van Alphen effect; see http://www.phys.ufl.edu/fermisurface/
4. ^ Kittel, p. 179
5. ^ Kittel, pp. 245-248
6. ^ Kittel, Eq. 42 p. 267
7. ^ Daniel Charles Mattis (1994). The Many-Body Problem: Encyclopaedia of Exactly Solved Models in One Dimension. World Scientific. p. 340. ISBN 981-02-1476-6.
8. ^ Walter Ashley Harrison (1989). Electronic Structure and the Properties of Solids. Dover Publications. ISBN 0-486-66021-4.
9. ^ Joginder Singh Galsin (2001). Impurity Scattering in Metal Alloys. Springer. Appendix C. ISBN 0-306-46574-4.
10. ^ Kuon Inoue, Kazuo Ohtaka (2004). Photonic Crystals. Springer. p. 66. ISBN 3-540-20559-4.
11. ^ Hohenberg, P.; Kohn, W. (Nov 1964). "Inhomogeneous Electron Gas". Phys. Rev. 136 (3B): B864–B871. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
12. ^ Paier, J.; Marsman, M.; Hummer, K.; Kresse, G.; Gerber, IC.; Angyán, JG. (Apr 2006). "Screened hybrid density functionals applied to solids.". J Chem Phys 124 (15): 154709. Bibcode:2006JChPh.124o4709P. doi:10.1063/1.2187006. PMID 16674253.
Further reading[edit]
1. Microelectronics, by Jacob Millman and Arvin Gabriel, ISBN 0-07-463736-3, Tata McGraw-Hill Edition.
2. Solid State Physics, by Neil Ashcroft and N. David Mermin, ISBN 0-03-083993-9
3. Elementary Solid State Physics: Principles and Applications, by M. Ali Omar, ISBN 0-201-60733-6
4. Electronic and Optoelectronic Properties of Semiconductor Structures - Chapter 2 and 3 by Jasprit Singh, ISBN 0-521-82379-X
5. Electronic Structure: Basic Theory and Practical Methods by Richard Martin, ISBN 0-521-78285-6
6. Condensed Matter Physics by Michael P. Marder, ISBN 0-471-17779-2
7. Computational Methods in Solid State Physics by V V Nemoshkalenko and N.V. Antonov, ISBN 90-5699-094-2
8. Elementary Electronic Structure by Walter A. Harrison, ISBN 981-238-708-0
9. Pseudopotentials in the theory of metals by Walter A. Harrison, W.A. Benjamin (New York) 1966
10. Tutorial on Bandstructure Methods by Dr. Vasileska (2008)
|
0526fa856311860d |
I am trying to understand how complex numbers made their way into QM. Can we have a theory of the same physics without complex numbers? If so, is the theory using complex numbers easier?
Complex numbers are fundamental and natural and QM can't work without them - or without a contrived machinery that imitates them. The commutator of two hermitian operators is anti-hermitian, so e.g. [x,p] when it's a c-number has to be imaginary. That's why either x or p or both have to be complex matrices - have complex matrix elements. Schrodinger's equation and/or path integral needs an $i$, too, to produce $\exp(i\omega t)$ waves with the clear direction-sign etc. See motls.blogspot.cz/2010/08/… – Luboš Motl Jul 20 '12 at 5:14
Quite on the contrary, Dushya. Mathematically, complex numbers are much more fundamental than any other number system, smaller or greater. That's also linked to a theorem that happens to be called the fundamental theorem of algebra, en.wikipedia.org/wiki/Fundamental_theorem_of_algebra - because it is fundamental - that says that n-th order polynomials have n roots but only if everything is in the complex realm. You say that complex numbers may be emulated by real numbers. But it's equally true - and more fundamental - that real numbers may be emulated by complex ones. – Luboš Motl Jul 20 '12 at 9:43
There are no numbers in Nature at all... – Kostya Jul 20 '12 at 16:43
@Dushya ... the reference "fundamental" here for math is the fact that $\mathbb{C}$ is a field extension of $\mathbb{R}$, and not the other way around. There is nothing more to be said about this. – Chris Gerig Jul 20 '12 at 17:40
In principle you can also use $2\times2$ matrices in the form $\begin{pmatrix} x & y \\ -y & x \end{pmatrix}$. (this remark is in the spirit of Steve B's answer) – Fabian Aug 15 '12 at 17:37
10 Answers
The nature of complex numbers in QM turned up in a recent discussion, and I got called a stupid hack for questioning their relevance. Mainly for therapeutic reasons, I wrote up my take on the issue:
On the Role of Complex Numbers in Quantum Mechanics
It has been claimed that one of the defining characteristics that separate the quantum world from the classical one is the use of complex numbers. It's dogma, and there's some truth to it, but it's not the whole story:
While complex numbers necessarily turn up as first-class citizens of the quantum world, I'll argue that our old friend the reals shouldn't be underestimated.
A bird's eye view of quantum mechanics
In the algebraic formulation, we have a set of observables of a quantum system that comes with the structure of a real vector space. The states of our system can be realized as normalized positive (thus necessarily real) linear functionals on that space.
In the wave-function formulation, the Schrödinger equation is manifestly complex and acts on complex-valued functions. However, it is written in terms of ordinary partial derivatives of real variables and separates into two coupled real equations - the continuity equation for the probability amplitude and a Hamilton-Jacobi-type equation for the phase angle.
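Concretely (the standard Madelung form, added here for reference): writing $\psi = \sqrt{\rho}\,e^{iS/\hbar}$ splits the Schrödinger equation into

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0, \qquad \frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0, $$

a continuity equation for the density $\rho$ and a Hamilton–Jacobi-type equation for the phase $S$, whose last term is the so-called quantum potential.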
The manifestly real model of 2-state quantum systems is well known.
Complex and Real Algebraic Formulation
Let's take a look at how we end up with complex numbers in the algebraic formulation:
We complexify the space of observables and make it into a $C^*$-algebra. We then go ahead and represent it by linear operators on a complex Hilbert space (GNS construction).
Pure states end up as complex rays, mixed ones as density operators.
However, that's not the only way to do it:
We can keep the space real and endow it with the structure of a Lie–Jordan algebra. We then go ahead and represent it by linear operators on a real Hilbert space (Hilbert–Schmidt construction).
Both pure and mixed states will end up as real rays. While the pure ones are necessarily unique, the mixed ones in general are not.
The Reason for Complexity
Even in manifestly real formulations, the complex structure is still there, but in disguise:
There's a 2-out-of-3 property connecting the unitary group $U(n)$ with the orthogonal group $O(2n)$, the symplectic group $Sp(2n,\mathbb R)$ and the complex general linear group $GL(n,\mathbb C)$: If two of the last three are present and compatible, you'll get the third one for free.
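In symbols, on $\mathbb{R}^{2n}$ with compatible structures, the 2-out-of-3 property reads

$$ U(n) \;=\; O(2n) \cap GL(n,\mathbb{C}) \;=\; O(2n) \cap Sp(2n,\mathbb{R}) \;=\; GL(n,\mathbb{C}) \cap Sp(2n,\mathbb{R}). $$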
An example for this is the Lie-bracket and Jordan product: Together with a compatibility condition, these are enough to reconstruct the associative product of the $C^*$-algebra.
Another instance of this is the Kähler structure of the projective complex Hilbert space taken as a real manifold, which is what you end up with when you remove the gauge freedom from your representation of pure states:
It comes with a symplectic product which specifies the dynamics via Hamiltonian vector fields, and a Riemannian metric that gives you probabilities. Make them compatible and you'll get an implicitly-defined almost-complex structure.
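Equivalently, and up to sign conventions: the hermitian inner product decomposes as

$$ \langle u, v \rangle \;=\; g(u,v) \;+\; i\,\omega(u,v), $$

with $g$ the Riemannian metric, $\omega$ the symplectic form, and the almost-complex structure $J$ tying the two together via $\omega(u,v) = g(Ju, v)$.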
Quantum mechanics is unitary, with the symplectic structure being responsible for the dynamics, the orthogonal structure being responsible for probabilities and the complex structure connecting these two. It can be realized on both real and complex spaces in reasonably natural ways, but all structure is necessarily present, even if not manifestly so.
Is the preference for complex spaces just a historical accident? Not really. The complex formulation is a simplification as structure gets pushed down into the scalars of our theory, and there's a certain elegance to unifying two real structures into a single complex one.
On the other hand, one could argue that it doesn't make sense to mix structures responsible for distinct features of our theory (dynamics and probabilities), or that introducing un-observables to our algebra is a design smell as preferably we should only use interior operations.
While we'll probably keep doing quantum mechanics in terms of complex realizations, one should keep in mind that the theory can be made manifestly real. This fact shouldn't really surprise anyone who has taken the bird's eye view instead of just looking through the blinders of specific formalisms.
I am not very well versed in the history, but I believe that people doing classical wave physics had long since noted the close correspondence between the many $\sin \theta$s and $\cos \theta$s flying around their equations and the behavior of $e^{i \theta}$. In fact most wave-related calculations can be done with less hassle in the exponential form.
Then in the early history of quantum mechanics we find things described in terms of de Broglie's matter waves.
And it works, which is really the final word on the matter.
Finally, all the math involving complex numbers can be decomposed into compound operations on real numbers, so you can obviously re-formulate the theory in those terms; but there is no reason to think that you will gain anything in terms of ease or insight.
Can a complex infinite dimensional Hilbert space be written as a real Hilbert space with complex structure? It seems plausible that it can be done, but could there be any problems due to infinite dimensionality? – user10001 Jul 20 '12 at 2:45
The underlying field you choose, $\mathbb{C}$ or $\mathbb{R}$, for your vector space probably has nothing to do with its dimensionality. – Frank Jul 20 '12 at 2:52
@dushya: There are no problems due to infinite dimensionality, the space is separable and can be approximated by finite dimensional subspaces. – Ron Maimon Jul 20 '12 at 18:37
Complex numbers "show up" in many areas such as, for example, AC analysis in electrical engineering and Fourier analysis of real functions.
The complex exponential, $e^{st},\ s = \sigma + i\omega$ shows up in differential equations, Laplace transforms etc.
Actually, it just shouldn't be all that surprising that complex numbers are used in QM; they're ubiquitous in other areas of physics and engineering.
And yes, using complex numbers makes many problems far easier to solve and to understand.
I particularly enjoyed this book (written by an EE) which gives many enlightening examples of using complex numbers to greatly simplify problems.
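To make that concrete, here is a minimal phasor sketch (component values are arbitrary illustrations, not taken from the book): the steady-state current of a sinusoidally driven series RC circuit drops out of a single complex division, with the physical signal recovered as the real part at the end.

    import numpy as np

    # Series RC circuit driven by v(t) = V0*cos(w*t).
    # Phasor method: replace d/dt by j*w, solve algebraically, take Re at the end.
    R, C = 1e3, 1e-6                  # 1 kOhm, 1 uF (illustrative values)
    V0, w = 5.0, 2 * np.pi * 60       # 5 V amplitude, 60 Hz drive

    Z = R + 1 / (1j * w * C)          # complex impedance of the series pair
    I = V0 / Z                        # current phasor: one division, no ODE solved

    print(f"|I| = {abs(I) * 1e3:.3f} mA, phase lead = {np.degrees(np.angle(I)):.1f} deg")

    t = np.linspace(0, 1 / 60, 5)
    i_t = np.real(I * np.exp(1j * w * t))   # physical current i(t)
    print(i_t)

Solving the same problem as a real differential equation gives the identical answer with considerably more algebra, which is the point being made above.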
I guess I'm wondering if those complex numbers are "intrinsic" or just an arbitrary computing device that happens to be effective. – Frank Jul 20 '12 at 2:56
@Frank: you could ask the same thing about the real numbers. Who ever measured anything to be precisely $\sqrt 2$ meters, anyhow? – Niel de Beaudrap Jul 20 '12 at 4:29
What does it mean though that complex numbers "appear" in AC circuit analysis? The essence of AC is a sinusoidal driving component. You could say the nature of these components comes from geometric factors, made electrical by a dot product in generators. Once we have sinusoidal variables interacting in an electrical circuit, we know the utility of complex numbers. That, in turn, comes from the equations. What does that all mean though? – AlanSE Jul 20 '12 at 13:36
It means that if the sources in the circuit are all of the form $e^{st}$, the voltages and currents in the circuit will be of that form. This follows from the nature of the differential equations that represent the circuit. The fact that we choose to set $s = j\omega$ for AC analysis and then select only the real part of the solutions as a "reality" constraint doesn't change the mathematical fact that the differential equations describing the circuit have complex exponential solutions. – Alfred Centauri Jul 20 '12 at 13:49
Alan - it probably means nothing. It happens to be a tool that so far works pretty well. – Frank Jul 20 '12 at 15:55
Update: This answer has been superseded by my second one. I'll leave it as-is for now as it is more concrete in some places. If a moderator thinks it should be deleted, feel free to do so.
I do not know of any simple answer to your question - any simple answer I have encountered so far wasn't really convincing.
Take the Schrödinger equation, which does contain the imaginary unit explicitly. However, if you write the wave function in polar form, you'll arrive at a (mostly) equivalent system of two real equations: The continuity equation together with another one that looks remarkably like a Hamilton-Jacobi equation.
Then there's the argument that the commutator of two observables is anti-hermitian. However, the observables form a real Lie-algebra with bracket $-i[\cdot,\cdot]$, which Dirac calls the quantum Poisson bracket.
All expectation values are of course real, and any state $\psi$ can be characterized by the real-valued function $$ P_\psi(·) = |\langle \psi,·\rangle|^2 $$
For example, the qubit does have a real description, but I do not know if this can be generalized to other quantum systems.
I used to believe that we need complex Hilbert spaces to get a unique characterization of operators in the observable algebra by their expectation values.
In particular, $$ \langle\psi,A\psi\rangle = \langle\psi,B\psi\rangle \;\;\forall\psi \Rightarrow A=B $$ only holds for complex vector spaces: over the reals, any antisymmetric $A$ satisfies $\langle\psi,A\psi\rangle = 0$ for all $\psi$, and so has the same (vanishing) expectation values as the zero operator without being zero.
Of course, you then impose the additional restriction that expectation values should be real and thus end up with self-adjoint operators.
For real vector spaces, the latter automatically holds. However, if you impose the former condition, you end up with self-adjoint operators as well; if your conditions are real expectation values and a unique representation of observables, there's no need to prefer complex over real spaces.
The most convincing argument I've heard so far is that linear superposition of quantum states doesn't only depend on the quotient of the absolute values of the coefficients $|α|/|β|$, but also their phase difference $\arg(α) - \arg(β)$.
Update: There's another geometric argument which I came across recently and find reasonably convincing: The description of quantum states as vectors in a Hilbert space is redundant - we need to go to the projective space to get rid of this gauge freedom. The real and imaginary parts of the hermitian product induce a metric and a symplectic structure on the projective space - in fact, projective complex Hilbert spaces are Kähler manifolds. While the metric structure is responsible for probabilities, the symplectic one provides the dynamics via Hamilton's equations. Because of the 2-out-of-3 property, requiring the metric and symplectic structures to be compatible will get us an almost-complex structure for free.
You don't need polar form, just take the real and imaginary parts. – Ron Maimon Jul 20 '12 at 10:00
The most convincing I've heard so far is that since there are "waves" in QM, complex numbers formulation happen to be convenient and efficient. – Frank Jul 20 '12 at 15:56
If you don't like complex numbers, you can use pairs of real numbers (x,y). You can "add" two pairs by (x,y)+(z,w) = (x+z,y+w), and you can "multiply" two pairs by (x,y) * (z,w) = (xz-yw, xw+yz). (If you don't think that multiplication should work that way, you can call this operation "shmultiplication" instead.)
Now you can do anything in quantum mechanics. Wavefunctions are represented by vectors where each entry is a pair of real numbers. (Or you can say that wavefunctions are represented by a pair of real vectors.) Operators are represented by matrices where each entry is a pair of real numbers, or alternatively operators are represented by a pair of real matrices. Shmultiplication is used in many formulas. Etc. Etc.
I'm sure you see that these are exactly the same as complex numbers. (see Lubos's comment: "a contrived machinery that imitates complex numbers") They are "complex numbers for people who have philosophical problems with complex numbers". But it would make more sense to get over those philosophical problems. :-)
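A minimal executable version of this construction (just an illustration; the names shadd and shmul are made up here):

    import random

    def shadd(p, q):
        # "add" two pairs: (x,y) + (z,w) = (x+z, y+w)
        (x, y), (z, w) = p, q
        return (x + z, y + w)

    def shmul(p, q):
        # "shmultiply" two pairs: (x,y) * (z,w) = (xz - yw, xw + yz)
        (x, y), (z, w) = p, q
        return (x * z - y * w, x * w + y * z)

    # The pair (0, 1) plays the role of i: shmultiplied by itself it gives -1
    assert shmul((0, 1), (0, 1)) == (-1, 0)

    # Cross-check against Python's built-in complex type on random inputs
    x, y, z, w = (random.uniform(-1, 1) for _ in range(4))
    prod = complex(x, y) * complex(z, w)
    px, py = shmul((x, y), (z, w))
    assert abs(prod.real - px) < 1e-12 and abs(prod.imag - py) < 1e-12

That the cross-check passes is, of course, exactly the answer's point: the pairs are the complex numbers.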
+1 on schmultiplication – Emilio Pisanty Jul 20 '12 at 13:31
But doesn't that just change his question to "QM without shmultiplication"? – Alfred Centauri Jul 20 '12 at 14:00
I do like complex numbers a lot. They are extremely useful and convenient, in connection to the fundamental theorem of algebra, for example, or when working with waves. I'm just trying to understand. – Frank Jul 20 '12 at 15:30
Alfred - yes. That would be the point. I was wondering if there could be, I don't know, a matrix formulation of the same physics that would use another tool (matrices) than complex numbers. Again, I have no problem with complex numbers and I love them. – Frank Jul 20 '12 at 15:53
also note that you can model QM on a space of states on a sphere in $\mathbb{C}^n$ with radius $|x|^2+|y|^2+...=1$. These spheres have dimension $2n$ for the reals. – kηives Jul 20 '12 at 16:53
Just to put complex numbers in context, A.A. Albert edited "Studies in Modern Algebra" - from the Mathematical Assn of America. $\mathbb{C}$ is one of the normed division algebras - of which there are only four: $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$. One can do a search for "composition algebras" - of which $\mathbb{C}$ is one.
The complex numbers in quantum mechanics are mostly a fake. They can be replaced everywhere by real numbers, but you need to have two wavefunctions to encode the real and imaginary parts. The reason is just because the eigenvalues of the time evolution operator $e^{iHt}$ are complex, so the real and imaginary parts are degenerate pairs which mix by rotation, and you can relabel them using i.
The reason you know i is fake is that not every physical symmetry respects the complex structure. Time reversal changes the sign of "i". The operation of time reversal does this because it is reversing the sense in which the real and imaginary parts of the eigenvectors rotate into each other, but without reversing the sign of energy (since a time reversed state has the same energy, not negative of the energy).
This property means that the "i" you see in quantum mechanics can be thought of as shorthand for the matrix (0,1;-1,0), which is algebraically equivalent, and then you can use real and imaginary part wavefunctions. Then time reversal is simple to understand--- it's an orthogonal transformation that takes i to -i, so it doesn't commute with i.
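A small numerical illustration of that shorthand (nothing QM-specific here; it is just the real-matrix model of i):

    import numpy as np

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the matrix standing in for i
    K = np.array([[1.0, 0.0], [0.0, -1.0]])   # conjugation: (x, y) -> (x, -y)

    assert np.allclose(J @ J, -np.eye(2))      # J**2 = -1, so J behaves like i
    assert np.allclose(K @ J @ K, -J)          # conjugation flips the sign of "i"

    # x + i*y  <->  x*I + y*J, and matrix product = complex product
    def as_matrix(c):
        return c.real * np.eye(2) + c.imag * J

    a, b = 1.5 - 0.5j, -2.0 + 1.0j
    assert np.allclose(as_matrix(a) @ as_matrix(b), as_matrix(a * b))

The last assert says that $x + iy \mapsto xI + yJ$ is an isomorphism onto these real $2\times 2$ matrices; $K$, an orthogonal transformation like the time reversal discussed above, commutes with real combinations but anticommutes with $J$.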
The proper way to ask "why i" is to ask why the i operator, considered as a matrix, commutes with all physical observables. In other words, why are states doubled in quantum mechanics in indistinguishable pairs. The reason we can use it as a c-number imaginary unit is because it has this property. By construction, i commutes with H, but the question is why it must commute with everything else.
One way to understand this is to consider two finite dimensional systems with isolated Hamiltonians $H_1$ and $H_2$, with an interaction Hamiltonian $f(t)H_i$. These must interact in such a way that if you freeze the interaction at any one time, so that $f(t)$ rises to a constant and stays there, the result is going to be a meaningful quantum system, with nonzero energy. If there is any point where $H_i(t)$ doesn't commute with the i operator, there will be energy states which cannot rotate in time, because they have no partner of the same energy to rotate into. Such states must be necessarily of zero energy. The only zero energy state is the vacuum, so this is not possible.
You conclude that any mixing through an interaction Hamiltonian between two quantum systems must respect the i structure, so entangling two systems to do a measurement on one will equally entangle with the two states which together make the complex state.
It is possible to truncate quantum mechanics (at least for sure in a pure bosonic theory with a real Hamiltonian, that is, PT symmetric) so that the ground state (and only the ground state) has exactly zero energy, and doesn't have a partner. For a bosonic system, the ground state wavefunction is real and positive, and if it has energy zero, it will never need the imaginary partner to mix with. Such a truncation happens naturally in the analytic continuation of SUSY QM systems with unbroken SUSY.
Frank, I would suggest buying or borrowing a copy of Richard Feynman's QED: The Strange Theory of Light and Matter. Or, you can just go directly to the online New Zealand video version of the lectures that gave rise to the book.
In QED you will see how Feynman dispenses with complex numbers entirely, and instead describes the wave functions of photons (light particles) as nothing more than clock-like dials that rotate as they move through space. In a book-version footnote he mentions in passing "oh by the way, complex numbers are really good for representing the situation of dials that rotate as they move through space," but he intentionally avoids making the exact equivalence that is tacit or at least implied in many textbooks. Feynman is quite clear on one point: It's the rotation-of-phase as you move through space that is the more fundamental physical concept for describing quantum mechanics, not the complex numbers themselves.[1]
I should be quick to point out that Feynman was not disrespecting the remarkable usefulness of complex numbers for describing physical phenomena. Far from it! He was fascinated, for example, by the complex-plane equation known as Euler's Identity, $e^{i\pi} = -1$ (or, equivalently, $e^{i\pi} + 1 = 0$), and considered it one of the most profound equations in all of mathematics.
It's just that Feynman in QED wanted to emphasize the remarkable conceptual simplicity of some of the most fundamental concepts of modern physics. In QED for example, he goes on to use his little clock dials to show how in principle his entire method for predicting the behavior of electrodynamic fields and systems could be done using such moving dials.
That's not practical of course, but that was never Feynman's point in the first place. His message in QED was more akin to this: Hold on tight to simplicity when simplicity is available! Always build up the more complicated things from that simplicity, rather than replacing simplicity with complexity. That way, when you see something horribly and seemingly unsolvable, that little voice can kick in and say "I know that the simple principle I learned still has to be in this mess, somewhere! So all I have to do is find it, and all of this showy snowy blowy razzamatazz will disappear!"
[1] Ironically, since physical dials have a particularly simple form of circular symmetry in which all dial positions (phases) are absolutely identical in all properties, you could argue that such dials provide a more accurate way to represent quantum phase than complex numbers. That's because as with the dials, a quantum phase in a real system seems to have absolutely nothing at all unique about it -- one "dial position" is as good as any other one, just as long as all of the phases maintain the same positions relative to each other. In contrast, if you use a complex number to represent a quantum phase, there is a subtle structural asymmetry that shows up if you do certain operations such as squaring the number (phase). If you do that to a complex number, then for example the clock position represented by $1$ (call it 3pm) stays at $1$, while in contrast the clock position represented by $-1$ (9pm) turns into a $1$ (3pm). This is no big deal in a properly set up equation, but that curious small asymmetry is definitely not part of the physically detectable quantum phase. So in that sense, representing such a phase by using a complex number adds a small bit of mathematical "noise" that is not in the physical system.
Yes, we can have a theory of the same physics without complex numbers (without using pairs of real functions instead of complex functions), at least in some of the most important general quantum theories. For example, Schrödinger (Nature (London) 169, 538 (1952)) noted that one can make a scalar wavefunction real by a gauge transform. Furthermore, surprisingly, the Dirac equation in electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which component can also be made real by a gauge transform (http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (an article published in the Journal of Mathematical Physics) or http://arxiv.org/abs/1008.4828 ).
Let the old master Dirac speak:
So if I interpret Dirac right, the use of complex numbers helps to distinguish between quantities that can be measured simultaneously and those which can't. You would lose that feature if you formulated QM purely with real numbers.
@asmaier: I looked at the quote in the book,and I tend to interpret it as follows: in a general case, it is not possible to measure a complex dynamical variable. So I don't quite understand how you make your conclusion: "the use of complex numbers helps to distinguish between quantities, that can be measured simultaneously and the one which can't" – akhmeteli Nov 3 '13 at 15:45
I'm not sure if that is a good example, but think about the wave function described by Schrödingers equation. One could split Schrödingers equation into two coupled equations, one for the real and one for the imaginary part of the wave function. However one cannot measure the phase and the amplitude of the wave function simultaneously, because both measurements interfere with each other. To make this manifest, one uses a single equation with a complex wave function, and generates the observable real quantity by squaring the complex wave function. – asmaier Nov 3 '13 at 16:23
@asmaier: I still don't quite see how this supports your conclusion that I quoted. By the way, as you mentioned the Schrödinger equation, you might wish to see my answer to the question. – akhmeteli Nov 3 '13 at 18:47
|
fc0453936afa9bf6 | Human molecule
Human Molecules (1910)
Example of an early 20th-century style human molecule themed article, entitled "Human Molecules" (1910), by American philosopher Mary Mesny, in which she defines a person as an atom or molecule and outlines a simple human chemical bonding theory modeled on affinity bonding (valences) of atoms. [69]
See also: Human molecule (Wikipedia); Human molecule (banned)
In human chemistry, the human molecule is the atomic definition of a person. [1] The following 26-element formula is the latest calculation (2007) of the molecular formula for a typical 70kg (154lb) person: [2]
where EN, e.g. E22, denotes ten raised to the power N. A 22-element empirical formula for a human was first calculated in 2000 by American limnologists Robert Sterner and James Elser. [11] A molecule, according to the 1649 coining of the term by French thinker Pierre Gassendi, is a structure of two or more connected atoms, and a person, according to functional mass composition data, is composed of twenty-six types of elements (atoms); accordingly, the term 'human molecule' is the scientific name for the chemical definition of one human. In this sense, from the perspective of chemical reactions between people, as captured in the motto "love the chemical reaction", such as in a couple-forming reaction:
A + B → AB (combination reaction)
the reactants (A + B) and product (AB) in the human chemical reaction are technically "molecules" no different than any other molecule in the universe. The union of two molecules, AB, in this example, would be termed a dihumanide molecule, i.e. two human molecules chemically bonded. There do exist, to note, many characteristic differences between complex, multi-element human molecules and other simpler molecules, such as H2O, one prominent difference being that there exists a metabolic effect, or atomic turnover rate, in the body of the human molecule. Synonyms of the term 'human molecule' include: chemical species, human particle, human element, social atom, human atomism, etc., depending on the framework of study. The human structure is no exception.
Periodic table (elements of the human molecule)
Functional elements (highlighted), from hydrogen (smallest) to iodine (largest), in the human molecule, according to 2002-2007 research of engineer Libb Thims, as shown (hyperlinked) on the hmolscience periodic table. [1]
Elements: 26 atoms in the human body
There are 92 types of atoms naturally occurring in the volumetric region of the earth. Each type of atom is characterized by the number of protons in its nucleus, the number being representative of the name of the element the atom is, a number which varies from one to one-hundred-and-eighteen. Hydrogen, symbol H, containing one proton, is the smallest type of atom. Helium, symbol He, containing two protons, is the next-largest type of atom. A bound-state structure of atoms is what is called a molecule. The human being is one such bound-state structure. The number of elements said to be actively functional in the composition of the human varies from 22 to 28, depending on the source.
Etymology | 1789
The English "human molecule" originated in the French version of the the term molécule humaine. The earliest documented use of the term ‘molécules humaines’, discovered thus far, is found in the 1789 edition of the multi-volume treatise Philosophy of Nature by French philosopher Jean Sales who uses the term 'human molecule', functionally, by stating: [46]
Jean Sales
In 1789, French philosopher Jean Sales coined the term 'molécule humaine' or human molecule (English).
“We conclude that [there exists] a principle of the human body [which] comes from the great [process] [in which] so many millions of atoms of the earth become many millions of human molecules.”
This French origin has to do with the fact that the term 'molécule' itself originated in France, supposedly first used either in the circa 1620 works of French philosopher Rene Descartes, who is said to have used the term to mean a small mass, or in the 1649 work of French thinker Pierre Gassendi, who used the term molecule in the sense of the attachment of two or more atoms. The first English usage of the term 'human molecule' seems to come from the 1855 English translation of French composer Hector Berlioz's 1854 book Evenings with the Orchestra, in the original French version of which Berlioz used the term 'molécule humaine' in referring to children in a choir. Prior to (and after) this usage by Berlioz, however, there seems to exist a large, yet undocumented, usage of the term molécule humaine in French publications, e.g. Alphonse Esquiros (1840), Yves Guyot (1903), etc.
Human gas particle (diagram)
1999 artistic rendition of the human particle (human atom or human atomism) view of people conceived as Daniel Bernoulli-style kinetic theory gas particles. [67]
The first use of the term human molecule in a semi-modern scientific sense is the 1869 "psychology of the human molecule" usage by French historian Hippolyte Taine, a usage later adopted by those including: German physician Ernst Gryzanowski (1875), American historian Henry Adams (1885), and French education theorist Max Leclerc (1894). The usage by Adams would carry through to influence others, such as American sociologist Robert Nisbet; Nisbet, for instance, employed the term 'social molecule' to refer to the attachment of two or more 'human particles', as he called them, affixed together by a 'social bond', a subject about which he wrote a book (1970).
Sciences: physics, chemistry, thermodynamics
The sciences that study the human molecule can be divided into three general groups: human physics, human chemistry, and human thermodynamics. Human physics tends to concern itself with the forces that act on human molecules, often modeling the human molecule as a particle, i.e. a human atom, social atom, or human particle.
Human chemistry tends to concern the application of the principles of chemistry, particularly chemical bonding, collision theory, activation energy, molecular orbital theory, etc., to interactions of human molecules and the structures they form. Human thermodynamics tends to study boundaried sets or "systems" of interactive human molecules, which constitute thermodynamic systems, i.e. working bodies, according to the laws and principles of thermodynamics.
Human molecule diagram
Human molecular formula diagram from chapter "The Human Molecule" in Human Chemistry (2007) by Libb Thims.
Short history
The earliest views of what the "human being" is include French philosopher Rene Descartes' 1637 animal machine hypothesis (see: human machine), the human motor view in the 18th century, German polymath Johann von Goethe's view (see: Goethe's human chemistry) of people as a type of reactive chemical species, and English chemist Humphry Davy's 1813 point atom view of man. [6]
To complicate matters, in 1869 Russian chemist Dmitri Mendeleyev famously arranged the 66 elements known at the time into a periodic table, listed in order of atomic weights, in such a manner that their properties repeated in a series of periodic intervals. [3] Following this point in history, it was beginning to become apparent that the human being may be a type of molecule.
It soon became apparent that a human being may have a molecular formula in relation to these elements. The first to state this explicitly was American physician George Carey, who in his 1919 book Chemistry of Human Life, stated that "man's body is a chemical formula in operation." [5]
The first calculations of the empirical molecular formula for the human were made independently in 2000 by American limnologists Robert Sterner and James Elser (22-element formula), in 2002 by American chemical engineer Libb Thims (26-element formula), and in 2005 by New Scientist magazine (12-element formula). [12] In modern atomic detail, according to the most up-to-date mass composition estimates, the human being is a twenty-six element molecule, as shown pictured.
Which molecule has free will, is alive, is moral, has a brain? Side-by-side comparison images: a retinal molecule (an animate molecule) and a human molecule (baby).
The "forced" input of a single photon (a force carrier) causes the three-element retinal molecule to "move" into a straightened position; when the light is no longer present, the retinal molecule reverts back to the bent position. The "forced" input of a billions of photons (force carriers) causes the twenty-six-element human molecule to "move" into a straightened upright position; when the light is no longer present (e.g. nighttime), the human molecule reverts back to its bent position (e.g. curled in sleep).
Free will
In discussions on the idea of the person as a "molecule" the topic of free will, i.e. the conception that a person exercises control over the choices made in life, often comes to the fore. Russian bioelectrochemist Octavian Ksenzhek tells us in 2007 that "people are the molecules of which an economy consists", but also clarifies, in the context of water molecules forming frost on glass, that: [31]
“Molecules have neither free will nor any will at all.”
Ksenzhek goes on to state that "all a molecule can do is repel elastically from other chaotically moving molecules and sometimes, very seldom, lose some of its degrees of freedom and freeze in a larger collective." Certainly there is a difference between water molecules and human molecules, but, nevertheless, the concept of 'free will' becomes defunct when each is considered as a molecule, pure and simple. Many, however, will not admit to this. In 1952, English physicist C.G. Darwin argued that humans are molecules governed by the laws of thermodynamics, but also conjectured that 'human molecules' have free will owing to their 'unpredictability'. [4] This, of course, is incorrect. Nobody in the history of science has ever found a molecule in possession of a free will. [26]
To clarify, in modern human chemistry and human thermodynamics, a human being is defined as a molecule, i.e. a "human molecule", and systems of humans are defined as thermodynamic systems, governed by the laws of chemistry and physics. In this view, the conception of a molecule, human or otherwise, with a free will becomes an absurdity. The modern view, conversely, shows the concept of free will to be a defunct scientific theory, replaced by more updated views, such as induced movement, or more generally the view of human chemical reactions governed by the spontaneity criterion, activation energy, collision theory, and free energy coupling between human chemical bonds, among other basic concepts of chemistry applied to human movements.
Walking molecule (human atomic stick figure)
Humorous depiction of a human-like 'walking molecule' from the 2009 NY Times article "Experiments Show That Molecules Can Walk" by Henry Fountain. [54]
Walking molecules
See main: walking molecule, molecular carrier, molecular spider, and molecular car
It is an often-neglected fact that humans are molecules that walk, run, and sometimes fly, on or above a 'surface', which in a chemical-definitions sense can be defined as either substrate or catalyst, depending on the context of discussion, which varies with the subject mode: surface chemistry, surface physics, or surface thermodynamics. In this perspective, an intuitive way to better understand human behavior (movement and reactions) is to use the conception that humans are 'walking molecules' on a surface and, from this perspective, study the behaviors and operation of smaller, nano-sized 'walking molecules'. The first operational walking molecule was developed in 2005 by German-born American physical chemist Ludwig Bartels at the University of California, Riverside, who designed a molecule, called 9,10-dithioanthracene (DTA), that can walk in a straight line on a flat surface, like a little person.
Recent years have seen spin-off varieties, such as molecular cars (2005), molecular carriers (2007), and DNA-based, four-legged molecular spiders (2010), among others. One interesting recent design (2009) is a 21-atom, two-legged, track-affixed walker that moves down the track, step by step, when its environment oscillates between basic and acidic. [55] Here we can extrapolate to understand how humans move differently depending on how their environment oscillates, factors including temperature and pressure, as well as more abstract oscillations, such as terrorism, war, or famine. Findings from these various walkers include the fact that more fuel (or energy) is needed when walkers carry a load, that most molecular walkers need some help, in the form of chemicals, to keep them going, and that they tend to wander.
Walking molecule (two-legged)
A two-legged 21-atom walking molecule (red) on a track, on which it walks when its environment oscillates between basic and acidic. [55]
When confronted with the question of what the difference is between a walking 'human molecule' (person) and a nano-sized 'walking molecule' (such as DTA), the person new to this mode of logic (even hardened non-religious scientists) will bring up antiquated objections, such as: a human is different because he or she has a brain, has consciousness, has free will, can choose their actions, is alive, among other nonsensical objections. The religious-type person will quickly bring up the 7,000-year-old theory that a human being 'has a soul' (or spirit), which is beyond the definitions of the science of atoms.
These few examples highlight the precipice of the revolution in human thinking that must take place, in the years to follow, in order to bring universal acceptance to the logic that humans are molecules that chemically react together on a surface driven by solar heat, a view which defines the science of human chemistry.
Hector Berlioz | Hippolyte Taine (young)
French composer Hector Berlioz used the term “human molecule” in 1854, albeit, it seems, rather metaphorically. French philosopher Hippolyte Taine independently used the 'human molecule', in a scientific sense, in 1869, and considered it the central subject of study for both the psychologist and the historian.
1854: Berlioz
On the heels of the earlier 1789 usage of the 'human molecule' conception by French philosopher Jean Sales, in 1854 French composer Hector Berlioz used the term “molécules humaines”, which was translated the following year (in English) as 'human molecules', albeit in a rather poetic or artistic way. Berlioz used the term rather superficially, referring to the filling-up of the multi-leveled amphitheater of St. Paul's Cathedral by the boys and girls of the choir as being similar to the phenomenon of crystallization, which he had previously viewed microscopically. He states: [38]
“The points of this crystal of human molecules, constantly directed from the circumference towards the center, was bi-colored—the dark blue of the little boys’ coats on the upper stages, and the white of the little girls’ frocks and caps occupying the lower ranks. Besides this, as the boys wore either a polished brass badge or silver medal, their movements caused the light reflected by these metal ornaments to flash and produce the effect of a thousand sparks kindling and dying out every minute upon the somber background of the picture.”
In other words, Berlioz seems to view the movement of the choir as a larger shimmering crystal of human molecules.
Timeline video on the human molecule, set to the 2008 song “Human” by The Killers (March 2009).
1869: Taine
French historian Hippolyte Taine, independent of and contrary to the prior metaphorical use of the term human molecules by Berlioz, was the first to use the term in a scientific sense, to build an argument on this concept, and to have others adopt his usage. In 1869, in the preface to the book On Intelligence, Taine stated ‘it is now admitted that the laws which rule formation, nutrition, locomotion, for bird or reptile, are but one example and application of more general laws which rule the formation, nutrition, locomotion, of every animal.’ He continues ‘in the same way we begin to admit that the laws which rule the development of religious conceptions, literary creations, scientific discoveries, in a nation, are only an application and example of laws that rule this same development at every moment and with all men.’ In other terms, Taine states, ‘the historian studies psychology in its application, and the psychologist studies history in its general forms.’ On this logic, Taine reasons: [16]
"He first notes and follows the general transformations presented by a certain human molecule, or a certain peculiar group of human molecules; and, to explain these transformations, he writes the psychology of the molecule or its group."
In sum, in the preface of his book, he states that ‘for the last fifteen years I have contributed to these special psychologies’. Moreover, ‘I now attempt a general psychology.’ He notes, however, that ‘to embrace this subject completely, this theory of the Intelligence (faculty of knowing) needs a theory of the will added to it.’
Taine's human molecular philosophy had a significant influence on American historian Henry Adams, who became acquainted with Taine's philosophy as early as 1873. Adams' associate German physician Ernst Gryzanowski also seems to have adopted Taine's human molecule conception, using it in his 1875 article "Comtism", discussed below.
Léon Walras | Vilfredo Pareto
In circa 1872, in efforts to make a science out of economics, French economist Léon Walras began to develop a theory of economic equilibrium in which he considered people to be "economic molecules"; students of this school of thought include French-Italian mathematical engineer Vilfredo Pareto and Polish sociologist Léon Winiarski, who each developed human molecular theories of their own, the latter using Rudolf Clausius as a basis.
1870-1903: Lausanne school - Walras|Pareto|Winiarski
In 1870, French economist Léon Walras became professor of political economics (and later chair) at the University of Lausanne; together with his protégé, French-Italian mathematical engineer Vilfredo Pareto, and their followers, most notably Polish sociologist Léon Winiarski, this school of thought came to be known as the ‘Lausanne school’. [52] Walras considered people to be "economic molecules" and aimed to formulate an economic equilibrium theory based on mathematics and science.
On this logic, in the years to follow, Pareto began to define a person explicitly as a ‘human molecule’ and to further outline a sociological theory based on human molecular interactions. In his 1896 Course on Political Economics, Pareto specifically defines a social system as follows:
“Society is a system of human molecules in a complex mutual relationship.”
In the context of economic satisfaction, Pareto posits that the human molecule acts only in response to the force of ophelimity:
"First we separate the study of ophelimity (economic satisfaction) from the diverse forms of utility, then we direct our attention to man himself; stripping him of a large number of his attributes, leaving out the passions, good or bad, reducing him to a kind of molecule that only acts in response to the forces of ophelimity."
The constant flow of human molecules (2006)
2006 photo by American photographer Pierre Rousseau entitled “The Constant Flow of Human Molecules”, with the subtitle: “in blind service to Kant's Categorical Imperative. The newest psycho-babble craze is to get happy by preservation of "good" behaviors in acceptance of and slavery to the machine.” [59]
This was outlined further in his 1916 Treatise on General Sociology, wherein his goal, as has been argued, was to construct a system of sociology analogous in its essential features to the generalized chemical thermodynamics system outlined in American mathematical physicist Willard Gibbs’ 1876 On the Equilibrium of Heterogeneous Substances. A later protégé of this school of logic was Polish sociologist Léon Winiarski, who formulated the subject of "social mechanics", a course taught at the University of Geneva (1894-1900), based on the dynamics of Italian mathematician Joseph Lagrange and the thermodynamics of German physicist Rudolf Clausius. To cite an excerpt of Winiarski's 1898 book Essay on Social Mechanics: [51]
“It is axiomatic to say that the fundamental forces soliciting the individual in society are egoism and altruism. If we consider the individual as a molecule of the social aggregate, these two forces can be regarded as playing the same role that attraction and repulsion play in any material system of the universe.”
In another instance, Winiarski states:
“The society is therefore considered as an aggregate of individual molecules, each depending on the forces of desire, which together through their interaction tend the society towards maximum satisfaction.”
Winiarski was the leading thinker of the Lausanne school, particularly for his use of thermodynamic formulation.
Henry Carey
American economist Henry Carey, described as the 'Newton of social science' for his use of physics and chemistry in explaining the social phenomenon of reactions between people, the 'molecules of society'.
1874: Henry Carey
In 1874, American economist Henry Carey outlined the subject of 'social science' as such: [61]
“Man, the molecule of society, is the subject of social science.”
Here, Carey alludes to the concept of the person as a human molecule; through his volumes of work in sociology he eventually came to be referred to as the ‘Newton of social science’ for his law of social gravitation, drawing extended commentary on the physics and chemistry of human molecules from those such as Austrian social economist Werner Stark. In introducing the topic of social heat between reactive human molecules in society, according to 1962 commentary by Stark, Carey states: [62]
“In the inorganic world, every act of combination is an act of motion. So it is in the social one. If it is true that there is but one system of laws for the government of all matter, then those which govern the movements of the various inorganic bodies should be the same with those by which is regulated the motion of society; and that such is the case can readily be shown.”
The terms 'organic world' (carbon-based) versus 'inorganic world' (non-carbon-based), to note, are antiquated synonyms for the life (animate) versus non-life (inanimate) dichotomy, a play on the theory that living things are made of carbon. Next, in what seems to be a citation of the Berthelot-Thomsen principle, that the heat of a reaction is the true measure of affinity, Carey states: “to motion there must be heat, and the greater the latter, the more rapid will be the former.” This quotation, according to Stark, means that:
“In the physical universe, heat is engendered by friction. Consequently the case must be the same in the social world. The ‘particles’ must rub together here, as they do there. The rubbing of the human molecules, which produces warmth, light and forward movement, is the interchange of goods, services, and ideas.”
All in all, this is a very cogent and modern presentation of chemical affinity, reactions between human molecules, heat, and work, in the context of the economics and sociology of the exchange of goods and services; albeit without the modern conceptions of entropy, activation energy, free energy coupling, etc.
1875: Gryzanowski
In his 1875 article "Comtism", German physician Ernst Gryzanowski argues: [33]
“Civil law, commerce, political economy, and international ethics are all based on the assumption that the social body consists of such human molecules, and there is no reason why the methods of physical science should not be applied to the statics and dynamics of that society, the passions and rights of the individual man corresponding exactly to the chemical and physical forces inherent in the material molecule.”
This quote captures Gryzanowski's opinion of how the social physics of French sociologist Auguste Comte would pan out in a modern sense. Gryzanowski seems to have adopted Taine's 1869 'human molecule philosophy', likely coming across it through his association with the North American Review (wherein an article on Taine's human molecule philosophy had been published two years prior) and through discussion with his friend Henry Adams, who had also adopted Taine's philosophy as his own.
Henry Adams (young)
American historian Henry Adams adopted Taine's 1869 'human molecular' philosophy, defining human chemistry as the study of the 'mutual attraction of equivalent human molecules' (1885), and wrote two books using the human molecule perspective: one on Willard Gibbs' phase rule applied to the phases of humanity (1909) and another on the application of William Thomson's degradation version of the second law to collective sets of evolving human molecules, viewed historically (1910).
1885: Adams' human molecules
As early as 1873, American historian Henry Adams had come across Taine's 1869 'human molecule philosophy' when, as the editor of The North American Review, he accepted the article “Taine’s Philosophy”, by James Bixby, for publication, wherein Taine’s philosophy of history is presented as an applied psychology of human molecules. American biographer Ernest Samuels argues that Adams was significantly influenced by Taine’s suggestion that the object of the historian is to study and follow the transformations of human molecules and to write history as the psychology of human molecules, and that Adams later adopted this view as his own. [27] To exemplify the influence of Taine on Adams: on 12 April 1885, during an extended stay at work in Washington, Adams wrote to his wife: [28]
“I am not prepared to deny or assert any proposition which concerns myself; but certainly this solitary struggle with platitudinous atoms, called men and women by courtesy, leads me to wish for my wife again. How did I ever hit on the only woman in the world who fits my cravings and never sounds hollow anywhere? Social chemistry—the mutual attraction of equivalent human molecules—is a science yet to be created, for the fact is my daily study and only satisfaction in life.”
This logic is clearly seen in Adams’ 1910 A Letter to American Teachers of History, wherein Adams argues that history must be viewed as transformations of groups of human molecules subject to the second law of thermodynamics. The book presents a bivalent discussion of the paradoxical relationship between Lord Kelvin's 1852 take on the second law as a universal tendency towards the dissipation of energy and Charles Darwin's 1859 take on evolution as a universal tendency towards the elevation of mental energy. Specifically, Adams reasoned that "the laws of thermodynamics must embrace human history in its past as well as in its early phase" and that, from the point of view of a physicist seeking to explain the fall of potential, as embodied in the first and second laws of thermodynamics, in relation to "Darwin's law of elevation", he reasoned:
"The historian will begin with his favorite figure of gaseous nebula, and may offer to treat primitive humanity as a volume of human molecules of unequal intensities, tending to dissipate energy, and to correct the loss by concentrating mankind into a single, dense like sun."
History, then, according to Adams, "would then become a record of successive phases of contraction, divided by periods of explosion, tending always towards an ultimate equilibrium in the form of a volume of human molecules of equal intensity, without coordination." In human chemistry terms, Adams was attempting to reconcile the second law, i.e. that all natural systems are irreversible and tend to dissipate energy in their work cycles, by postulating that human systems compensate or create new energy by the contraction of people in the formation of cities and world powers, similar to how the sun continuously releases energy by the gravitational contraction of mass. In modern terms, Adams' human molecule social contraction theory can be interpreted through the release of energy in the formation of new human chemical bonds in coupled coordination with the dissolution of old bonds.
Human Molecule (1988) Norval Morrisseau
1988 acrylic on canvas (27.5”x51.5”) painting, entitled “Human Molecule”, by Canadian aboriginal artist Norval Morrisseau, which seems to give the impression, possibly evolutionarily, that a human is a molecule, being part fish, part bird. [58]
1894: Leclerc
In 1894, French education theorist Max Leclerc commented on Taine’s 1875 text Growing Disagreement of School and Life (La Disconvenance Croissante de l’Ecole et de la Vie), stating, according to one review, that “in our Lycees there is the same military discipline (as under Napoleon), the same aggregation of numbered human molecules, which the huge wheel, turned throughout France by the Minister’s pedal, grinds and reduces to human powder.” [35] This view comes from Leclerc’s 1894 book Education in the Middle Classes in England, wherein he discusses the views of Taine and comments that: [36]
“France has repeatedly changed its political constitution in this century, but through all vicissitudes, under many different governments, the regime founded by Napoleon Bonaparte has persisted: the mode of education has remained the same. Twenty years ago France sought to establish freedom with the Republic; she believes she has succeeded, and freedom, she says, she possesses. How, then, does she prepare the new generations to use it? How have those born since 1870 been learning about freedom? If the parliamentary monarchy of July did not have the courage, if the Republic of 1848 did not have the time, if the Second Empire could not have the will to repudiate the dangerous legacy of Napoleon, has the Third Republic, which has the time and should have the courage and determination, undertaken what no one before it was able, willing, or dared to do? Has she understood the risk she runs in raising her free citizens by the very means that were combined to perpetuate the reign of the despot? Prefects and principals of the Republic today have no other conception of their role than they once had under the sword of Napoleon. In our schools there is the same military discipline, the same numbered piles of human molecules, which the huge wheel, turning throughout France under the pedal stroke of the Minister, crushes and pulps to human powder.”
1898: Ramsay
In 1898, Scottish chemist William Ramsay used the ‘human molecule’ analogy in his discussion of German physicist Rudolf Clausius’ 1857 kinetic theory of gases, wherein he compared a body of gas to a football team of human molecules: [29]
“I find, in my own case, that it helps greatly to a clear understanding of a concept if a mental picture can be called up which will illustrate the concept, if even imperfectly. Some such picture may be formed by thinking of the motions of the players in a game of football. At some critical point in the game, the players are running, some this way, some that; one has picked up the ball and is running with it, followed by two or three others; while players from the opposite side are slanting towards him, intent upon a collision. The backs are at rest, perhaps; but, on the approach of the ball to the goal, they quicken into activity, and the throng of human molecules is turned and pursues an opposite course. The failure of this analogy to represent what is believed to occur in a gas is that the players’ motion is directed and has purpose; that they do not move in straight lines, but in any curves which may suit their purpose; and that they do not, as two billiard-balls do, communicate their rates of motion to the other by collision. But, making such reservations, some idea may be gained of the encounters of molecules by the encounters in a football-field.”
We Human Chemicals (1948)
American writer Thomas Dreier's 1948 book We Human Chemicals: the Knack of Getting Along with Everybody, written with consultation from Harvard chemist Gustavus Esselen, in which principles of chemistry are applied to facets of human interactions on the view that each person is a 'human chemical' constructed from elements of the periodic table. [49]
1910: Dreier’s human chemicals
With the construction of the periodic table in 1869 by Russian chemist Dmitri Mendeleyev, in which the then 66 known (and hypothesized) elements were listed by their atomic weights in rows such that their properties repeated in a series of periodic intervals, some began to think of a human, invariably composed of these elements, by the name ‘human element’, ‘human chemical’, or ‘human chemical element’. One such person was American writer Thomas Dreier, who in 1910 published a 27-page pamphlet entitled “Human Chemicals”, extolling the view that each person is a ‘human chemical’, and that one might better come to understand human interactions if this view is used when considering the variants of human behavior, such as explosive behavior. [48] The following is an aggregate quote summarizing Dreier’s view on the matter, from his 1948 book We Human Chemicals, an expanded version of his earlier pamphlet, written with consultation from Harvard chemist Gustavus Esselen: [49]
“In the world of science, the chemist works with 96 elements, 92 of the periodic table, plus four recently discovered. These elements can be combined to make anything and everything of a material nature. So it is with people. All of us are human chemicals. Some human chemicals can be mixed only with great difficulty; some explode if brought together; some excite each other beneficially; others are inert; others mix to form potent combinations; still others act as potent chemical catalysts, bringing about desirable changes in others when mixed with them, without themselves being changed.”
The cover of the 1948 book is pictured adjacent, where each ‘human chemical’ is shown on a sort of mock human periodic table, which, to note, is similar to Goethe’s human affinity table (1808), although the latter is more accurate in a chemical thermodynamic sense.
1911: Perris
In the 1911 book A Short History of War and Peace, English journalist George Perris, according to a 1913 review by American writer Alpheus Snow, argues that: [39]
“War to bring about peace seems paradoxical. Yet it seems undoubtedly to be true, as Perris says, that war is often a process of evolution—an explosive process which occurs when the progressive movement of human molecules towards a reorganization making for equality of opportunity and a betterment of the law, is unduly held back by the forces of standpatism and vested interests.”
This, however, seems to be an interpretation of Perris’ opening chapter ‘The Human Swarm’ wherein he states that: “modern thought points to nothing so certainly as the universality of change. We stand on a whirling ball, every atom and molecule of which is in perpetual movement. Individually, we are aware of being different men and women every day of our lives; the life of the world has undergone such a transformation even during our own generation that an unmoved character-basis of society is incomprehensible, a miracle in a realm of law—and what an evil miracle.”
William Armstrong Fairburn (1876-1947)
English-born American naval engineer William Fairburn not only viewed people as 'human chemicals' but also wrote the first book on human chemistry (1914).
1914: Fairburn's human chemical elements
In 1914, American naval engineer and industrial chemistry executive William Fairburn wrote Human Chemistry, the first attempt at a book on the subject of human chemistry, with the aim of helping the foreman and executive better understand his or her job, being that of facilitating the various reactions between people, considered as human chemical elements, in their daily work in the factory. The following is an aggregate opening quote summarizing the view followed by Fairburn in his booklet:
“All men are like chemical elements in a well-stocked laboratory, and the manager, foreman, or handler of men, in his daily work, may be considered as the chemist [whose] primary requirement [or] principle work is the analysis and synthesis of the reactions resulting from combinations of individuals.”
Fairburn goes on to state that there were 81 known chemical elements, each possessing different characteristics, and that, similarly, so too is each human chemical element different from his fellows in temperament and qualifications. Fairburn, to note, uses the terms ‘human chemical’ and ‘human chemical element’ interchangeably in his book, speculating on topics such as how entropy applies to reactive human chemicals.
Pierre Teilhard
French philosopher Pierre Teilhard wrote extensively on the use of atomic reductionism, defining people as human molecules, in attempts to reinterpret religion, evolution, and spirituality in modern scientific language, with focus on the evolution of the mind and the social collective in view of the future as described in his omega point theory.
1916: Teilhard's human molecules
One who wrote extensively, in a very dense conceptual style, of considering people as molecules having evolved over time from atoms, was French philosopher Pierre Teilhard. He began working on his theory, through various unpublished essays, in 1916, until his death in 1955, after which his voluminous works were published posthumously. The following is a representative quote from 1947, alluding to his concept of the noosphere or global sphere of connected minds:
“The scope of each human molecule, in terms of movement, information, and influence, is becoming rapidly coextensive with the whole surface of the earth.”
The following quote gives a precursory outline of the very dense subject of the human chemical bond: [3]
“If the power of attraction between simple atoms is so great, what may we expect if similar bonds are contracted between human molecules?”
In sum, during the years 1916 to 1955, Teilhard outlines a theory of evolution from atom to man, the latter of which he considers as a complex molecule or "human molecule", a term that he uses throughout his writings.
In his articles on Human Energy, a collection of essays on morality and love, written between 1931 and 1939, for instance, he conceives of man as a “human molecule” (1936). Similarly, in his follow-up essay "Activation Energy", he theorizes that the concept of human reaction activation energy, i.e. the barrier to transition, applies to human interactions.
In his 1947 essay “The Formation of the Noosphere”, he outlines the global view that, due to the growing interconnectedness of human molecules, they are forming a layer of mind ("noo-") over the biosphere. In particular, he states: “no one can deny that a network (a world network) of economic and psychic affiliations is being woven at ever increasing speed which envelops and constantly penetrates more deeply within each one of us. With every day that passes it becomes a little more impossible for us to act or think otherwise than collectively.” In other words, according to Teilhard, human molecules are forming a connective sheath or skin around the globe of the earth.
1919: Patten
In his 1919 address “The Message of the Biologist”, American zoologist William Patten attempted to outline how the modern person might go about deriving a science-based system of morality and future governing constitution for a ‘molecular society’, of people considered as ‘human social atoms’ (social atoms) or ‘human molecules’, based on the pure science teachings of chemistry, physics, and astronomy. [30]
George Carey
American physician George Carey, first to state that a human being is actually a chemical formula (1919).
1919: George Carey | chemical formula in operation
A significant turning-point thinker in the history of the human molecule concept was American physician George Carey who, in his 1919 book The Chemistry of Human Life, made an attempt to integrate biochemistry with chemical affinity logic, along with knowledge of the active elements, into a synthesis of a chemistry of the human being. Although his work is detracted from a bit by religious notions and mineral-elixir types of healing remedies, he does outline a few gems. In a truncated opening quote, for instance, Carey points to the idea that: [5]
“The human organism is an intelligent entity that works under the guidance which man has designated as chemical affinity.”
Technically, to note, the full quote of what Carey said is that the 'mineral salts' of the human organism are 'intelligent entities' that work 'under divine guidance', which man has designated as chemical affinity. The above reworded quote, however, is the correct modern statement. Carey then goes on to state that the human body is a storage battery that must be supplied with the proper elements (chemicals) to set up motion at a rate that will produce what we please to call a live body. In commentary on how the laws of chemistry apply universally, he states: “there can be but one law of chemical operation in vegetable or animal organisms. When man understands and cooperates with that life chemistry, he will have solved the problem of physical existence.” The most interesting point in his book is his statement that:
“Man’s body is a chemical formula in operation”
It would be eighty-one years before Sterner and Elser would actually make an attempt at calculating this formula. And the addendum ‘in operation’ is a huge topic not yet even begun to be simplified, e.g. as exemplified by the fact that human molecules have a 46 percent annual atomic turnover rate, whereas other molecules, such as H2O, do not seem to have such a turnover rate.
Poll: Are you a giant molecule? (2008)
2008 poll "Are You a Giant Molecule" conducted online by English physicist Jim Eadon (graph from Thims' 2008 The Human Molecule), which shows that about 57% of Internet users think they are a molecule. [17]
1942: Schumpeter
In 1942, Austrian economist Joseph Schumpeter speculated on how certain human molecules move up and down in social class over time: [34]
“It can be shown that in all cases, that human molecules rise and fall within the class into which they are born, in a manner which fits the hypothesis that they do so because of their relative aptitudes; and it can also be shown, second, that they rise and fall across the boundary lines of their class in the same manner. This rise and fall into higher and lower classes as a rule takes more than one generation. These molecules are therefore families rather than individuals. And this explains why observers who focus attention on individuals so frequently fail to find any relation between ability and class position.”
In this quote, Schumpeter seems to be digging around in a number of issues: one, that human molecules are coupled to each other, especially in family lineages; two, that a group of human molecules can also be termed or considered a new type of larger aggregate human molecule, e.g. the trihumanide molecule (three human molecules bonded in a unit); among other factors.
1949 diagram of ‘the social atom’, in the Jacob Moreno scheme, of an interviewed female (#3), by American sociologist Leslie Zeleny. [56]
1940s: Moreno | social atom theory (psychology)
In 1917, Romanian-born American psychologist Jacob Moreno began to develop a modified Freudian psychology, focused on spontaneity states in the dynamics of human movement, and by circa 1951 had begun to quantify his theory using constructs of social entropy, employing a sort of three-dimensional social atom theory, explained in terms of 'tele' relationships, or distal and proximal bonds to other social atoms. The way he uses the term social atom, along with other varieties, such as the cultural atom or acquaintance atom, is a bit ambiguous: [47]
“Social atom, operational definition: plot all the individuals a person chooses and those who choose him or her; all the individuals a person rejects and those who reject him or her; all the individuals who do not reciprocate either choices or rejections. This is the ‘raw’ material of a person’s social atom.”
An interesting facet of Moreno's approach is his attempt at applying the Bohr model of the atom, in which quantum energy inputs to the orbitals of atoms cause shifting of electrons, up or down, in orbital structure, to changes or shifts in human relationships, e.g. as in an infant child's orbit about his mother.
A 1949 attempt at an illustration of Moreno’s social atom theory was undertaken by American sociologist Leslie Zeleny, who diagrammed the sociometric findings of the attraction-repulsion aspects of the relationships surrounding various people. A diagram of the social atom for female number three is shown adjacent. [56] The fact that the diagram is titled sociogram of 'the social atom of person number three', versus sociogram of 'a social atom', highlights that Moreno's theory is a more exotic extrapolation of the basic atomic model applied to human relationships, not easy to capture in a single definition.
According to Zeleny, her study of the social atom of #3 shows #3 to be a much-desired associate among her classmates, but one who is very ‘choosy’ about those with whom she will ‘pal’, as shown by the various dynamics of attraction and rejection, reciprocated or unreciprocated.
Charles Galton Darwin
English physicist C.G. Darwin (grandson of Charles Darwin) defined the science of 'human thermodynamics' as the study of systems of 'human molecules.' (1952)
1952: Darwin | thermodynamics of human molecules
The conception of a set of people as a collection of "human molecules", who interact according to the laws of physics, particularly statistical thermodynamics, and whose history and future are determined by the laws of thermodynamics, was first stated by English physicist Charles Galton Darwin, grandson of Charles Darwin, in his 1952 book The Next Million Years. [4] The following is an excerpt of the 1953 review by Time magazine: [68]
“In Darwin's view, the human molecules have one fundamental property that dominates all others: they tend to increase their numbers up to the absolute limit of their food supply.”
In his opening chapter, C.G. Darwin sets out to view society on the ideal gas model:
“We may, so to speak, reasonably hope to find the Boyle’s law which controls the behavior of those very complicated molecules, the members of the human race, and from this we should be able to predict something of man’s future. When I compare human beings to molecules, the reader may feel that this is a bad analogy, because unlike a molecule, a man has a free will, which makes his actions unpredictable.
This is far less important than might appear at first sight, as is witnessed by the very high degree of regularity that is shown by such things as census returns. Thus, though the individual collisions of human molecules may be a little less predictable than those of gas molecules, census returns show that for a larger population the results average out with great accuracy. The internal principle [internal energy] then of the human molecules is human nature itself.”
In this book, Darwin goes on to define the new future science of 'human thermodynamics' as the thermodynamic study of systems of 'human molecules', a significant turning point in human thought.
Pages three through five from the 2002 book Ecological Stoichiometry, by American limnologists Robert Sterner and James Elser, showing the first published calculation of the molecular formula of a human being. [11]
1970: Nisbet | social molecules
Influenced by the work of Henry Adams and Brooks Adams, in his 1970 book The Social Bond, American sociologist Robert Nisbet considers people to be ‘elementary human particles’, refers to the adhesion between two human particles as a ‘social bond’, and the attachment of two or more human particles to be a ‘social molecule’. Below is a representative quote:
“Just as modern chemistry concerns itself with what it calls the chemical bond, seeking the forces that make atoms stick together as molecules, so does sociology investigate the forces that enable biologically derived human beings to stick together in the ‘social molecules’ in which we actually find them from the moment, quite literally, of their conception.”
Beyond this, Nisbet spends considerable time discussing his conception of ‘social entropy’ and how this relates to human bonding. [45]
1998: Müller | human molecules (lecture)
In the 1998 article "Human Societies: A Curious Application of Thermodynamics", Venezuelan chemical engineer Erich Müller defined humans to be analogous to molecules (human molecules), then quantified inter-human molecular love and hate in terms of basic thermodynamic pair bonds, and quantified social forces as a type of van der Waals dispersion force. [9] In 2006, Müller was interviewed by journalist Laura Gallagher, of Reporter magazine, about his popular, invigorating thermodynamic lectures, in which he draws analogies between molecules and people. [10]
1998: Goleman
In 1998, American emotional intelligence theorist Daniel Goleman offered his view that: [37]
“Virtually everyone who has a superior is part of at least one vertical ‘couple’; every boss forms such a bond with each subordinate. Such vertical couples are a basic unit of organizational life, something akin to human molecules that interact to form the lattice work of relationship that is the organization.”
Here, Goleman seems to be making an attempt to discuss aspects of human chemical bonding.
1999: Prugh and Costanza
In 1999, American ecological economists Thomas Prugh and Robert Costanza state that: [32]
“The welfare of human society is best served by the view of people as ‘human molecules’ who, by pursuing their own interests through the market, inevitably promote the general good.”
American limnologists Robert Sterner and James Elser calculated a 22-element molecular formula for one average human being, based on actual mass composition measurements, in April 2000, as found in their 2002 textbook Ecological Stoichiometry.
2000: Sterner and Elser | empirical molecular formula
See main: Human molecular formula
The first calculation of the empirical molecular formula for a human being was made in April of 2000 by American limnologists Robert Sterner and James Elser. [15] Sterner and Elser published their results in the 2002 book Ecological Stoichiometry: the Biology of Elements from Molecules to the Biosphere. In outlining their subject, Sterner and Elser state:
“The stoichiometric approach considers whole organisms as if they were single abstract molecules.”
They were led to this by studying differences in carbon, nitrogen, and phosphorus levels in similar species. In their chapter one, as to the human being, they state that “from the information on the quantities of individual elements, we can calculate the stoichiometric formula for a living human being”, and, taking cobalt (Co) as unity, that the following “formula combines all compounds in a human being into a single abstract ‘molecule’”: [11]
H375,000,000 O132,000,000 C85,700,000 N6,430,000 Ca1,500,000 P1,020,000 S206,000 Na183,000 K177,000
This amounts to a 22-element human empirical molecular formula. They continue, “our main purpose in introducing this formula for the ‘human molecule’ is to stimulate you to begin to think about how every human being represents the coming together of atoms in proportions that are, if not constant, at least bounded and obeying some rules”.
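As an aside on method, the arithmetic behind such a formula is straightforward: divide each element's mass fraction in the body by its atomic weight to get relative mole numbers, then normalize to the scarcest element so its subscript becomes 1. The following is a minimal Python sketch of this procedure; the mass fractions used are illustrative placeholders, not Sterner and Elser's actual spreadsheet data.

# Sketch: converting elemental mass fractions of a body into an
# empirical "molecular" formula, normalized to the scarcest element.
ATOMIC_WEIGHT = {"H": 1.008, "O": 15.999, "C": 12.011, "Co": 58.933}

# Hypothetical mass fractions (gram of element per gram of body).
mass_fraction = {"H": 0.10, "O": 0.61, "C": 0.23, "Co": 3e-8}

# Relative moles of each element per gram of body.
moles = {el: frac / ATOMIC_WEIGHT[el] for el, frac in mass_fraction.items()}

# Normalize to the least-abundant element (cobalt, as Sterner and
# Elser did), so that its subscript becomes 1.
unity = min(moles.values())
formula = {el: round(n / unity) for el, n in
           sorted(moles.items(), key=lambda kv: kv[1], reverse=True)}

print(" ".join(f"{el}{count:,}" for el, count in formula.items()))

With real composition data, the same loop reproduces the hydrogen-to-cobalt ordering shown above.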
2001: Peachey | marriage and human molecules
In the 2001 book Leaving and Clinging, American author Paul Peachey devotes his first chapter, entitled “The Marital Bond as the Human Molecule”, to the development of the view that each person can be considered as an atom and that attachments of human atoms, in families and marriage, constitute a human molecule. To cite one example quote: [42]
“The question is whether the symbiosis of these polarities, i.e. the molecular (family) versus the atomic (individual) dimension of human existence, is a given in nature, or whether as humans we can replace this way of creating and sustaining the basic human molecule.”
Peachey, to note, seems to cull many of his ideas on human bonding and human molecules from the prior work of American sociologist Robert Nisbet.
In September 2002, American chemical engineer Libb Thims independently calculated a 26-element molecular formula for an average human being, based on actual mass composition measurements, as found in his 2007 textbook Human Chemistry.
2002: Thims | molecular formula (death)
In 1995, American chemical engineer Libb Thims began to study the spontaneity criterion (ΔG < 0), i.e. the requirement that a reaction (human reaction or chemical reaction) show a negative change in the Gibbs free energy if it is to be spontaneous (energetically feasible or successful), in relation to the basic human reproduction reaction, in which a man M and a woman W conceive a new baby B:
M + W → B
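A minimal sketch of the criterion itself, using the standard relation ΔG = ΔH − TΔS; the enthalpy and entropy values below are illustrative placeholders, not measured human-reaction data:

# Sketch: the spontaneity criterion dG = dH - T*dS for a generic
# combination reaction; values are assumed for illustration only.
def gibbs_change(dH: float, dS: float, T: float) -> float:
    """Gibbs free energy change (J) at absolute temperature T (K)."""
    return dH - T * dS

dH = -50_000.0  # J: exothermic bond formation (assumed)
dS = -100.0     # J/K: two entities combining lose entropy (assumed)

for T in (250.0, 310.0, 600.0):
    dG = gibbs_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:5.0f} K: dG = {dG:9.0f} J -> {verdict}")

The sign flip at high temperature illustrates why a negative ΔG, not a negative ΔH alone, is the test of feasibility.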
In circa 2002, Thims began to meditate on the issue of what exactly these entities M, W, and B are, from a chemical, atomic, or fundamental-particle point of view, given that he had been aiming to quantify them enthalpically and entropically for the previous seven years. In September of 2002, independent of Sterner and Elser, Thims calculated a 26-element molecular formula for the average human being. [12]
A molecular evolution table showing key structures in the synthesis of human beings (human molecules) over the last 13.7 billion years.
Thims included his calculation results in the 2002 manuscript Human Thermodynamics (Volume One), in the 2005 IoHT Molecular Evolution Table (online), and in the 2007 book Human Chemistry (Volume One). [12] Thims states, on page 190 of the 2002 manuscript, based on a mass-percent table of the 26 elements found to have function in the human body, that at approximately 200,000 years ago "the universe had expanded/reacted enough to form a molecule made of these specific elements that we now define as homo sapien", as can be represented by the following "crude empirical formula for the molecular human", taking vanadium (V) as unity: [13]
H2.5E9 O9.7E8 C4.9E8 N4.7E7 P9.0E6 Ca8.9E6 K2.0E6 Na1.9E6 S1.6E6 Cl1.3E6 Mg3.0E5 Fe5.5E4 F5.4E4 Zn1.2E4 Si9.1E3 Cu1.2E3 B7.1E2 Cr98 Mn93 Ni87 Se65 Sn64 I60 Mo19 Co17 V
This amounts to a 26-element human empirical molecular formula. Thims concludes: "by describing the existence of a human being in this form we are by no means making attempts to degrade our existence; we are only trying to help elucidate our understanding of this existence."
The need or drive for Thims to calculate the molecular formula originated in a short chapter of the then-forming manuscript (2001-2004) Human Thermodynamics, called "What Happens to a Person When They Die" (a precursor to the science of cessation thermodynamics), which sought to define, from a fundamental-particle point of view, what exactly a "human being" is; in other words, what fundamental particles constitute the totality of a person at the moment of death, in both bodily structure and bond structure, if these quantities are to be conserved according to the law of energy-matter conservation. From a chemical, or first-law-of-thermodynamics, point of view, the composition of a person is technically a twenty-six-element molecule combined with its substrate materials (personal wealth) and consortium of interpersonal human chemical bonds. In the years to follow, using more accurate mass composition tables, Thims made refinements to this formula.
2002: Hodgson | Little Fun Book of Molecules/Humans (book)
In 2002, similar in theme to Greek philosopher Empedocles' human chemistry analogies of how people who like each other mix like water and wine, American writer John Hodgson published the 102-page The Little Fun Book of Molecules/Humans, a short booklet containing ninety-eight chemistry-aphorism-style sayings intended to look at the similarities existing between humans and molecules, so as to, as stated in his preface, unearth clues to scientific information that might lead to new research. The second, 2010 edition of Hodgson's book, retitled molecules humans, seems to have taken its cover design from American chemical engineer Libb Thims' 2008 book The Human Molecule (a 120-page historical overview of the concept of the human molecule); which in turn took its cue from Italian polymath Leonardo da Vinci's circa 1487 drawing of the Vitruvian man, a geometrical model of a human in which each person is considered as a tiny micro-universe; which in turn took its cues from Roman architect Marcus Vitruvius and his hints at correlations of the ideal human geometric proportions, found in his 25BC book On Architecture.
Original circa 1487 drawing of the Vitruvian man by Italian polymath Leonardo da Vinci: a man depicted as a theoretical geometric figure, representative of a tiny universe, analogous in structure to the surrounding universe.
2002 first edition of American writer John Hodgson's chemical-aphorism-style book on the similarities between humans and molecules. [41]
2008 depiction of da Vinci's Vitruvian man defined as a 26-element molecule, shown on the cover of the 122-page book The Human Molecule by American chemical engineer Libb Thims. [2]
2010 depiction of da Vinci's Vitruvian man, representative of a human defined as similar, analogous, or aphorismic to a molecule, shown on the cover of the second edition of American writer John Hodgson's book, retitled molecules humans. [63]
Each page of Hodgson's book contains a different aphorism of which below are shown a few representative examples: [41]
“Different molecules or humans behave differently having different reactions or behaviors to changing situations.”
“When molecules or humans mesh they have chemical or physical reaction and or reproduction.”
“With experiment we can better understand these molecules or humans like we never knew before.”
“Molecules and humans take in elements or food.”
“Molecules and humans engage in different behaviors and or sex.”
“Molecules and humans make or change common bonds.”
Human molecule (2005): online publication of the formula for one human molecule (with rotating break-dancer and caffeine stick-figure representations) by American chemical engineer Libb Thims. [66]
All-in-all, Hodgson’s book contains 98 of these sayings; although most, to note, are rather incoherent and negligibly connected to actual human chemistry.
Human molecular orbitals | transition state theory
In 1923, French physicist Louis de Broglie conceived the wave-particle duality theory of matter, which states that all bodies in the universe have both a wave and a particle nature. [64] This subject is bound up in the famous and puzzling double-slit experiment, invented by Thomas Young in 1801. Molecules as large as 60-atom buckyballs have been shown to exhibit wave-particle duality. It is probable that human molecules also have not only a particle-like movement behavior, but also a wave-like behavior, a point first noted in Ernst Mach’s 1885 conception of “turning tendencies”. A modern version of this logic is human molecular orbital theory.
The study of the extrapolation of standard molecular orbital theory to the time-accelerated analysis of the dynamic structure, formation, and dissolution of chemical bonds between human molecules invokes the theoretical conception of 'human molecular orbitals', as defined by human molecular orbital theory. When human movement over the surface of the earth is viewed at a time-accelerated pace, such as viewing the total weekly, monthly, or yearly movements of one person, via for example GPS tracking, in a sped-up five-minute video clip, one begins to see an orbital picture of human movement. Tracking humans, considered as material points, over spans of months or years, and searching mathematically for patterns in short few-minute video-segment trajectory clips, leads to a view of human activity orbitals changing dynamically over time, as bonds are formed and broken. Shown below is a typical generic picture of the transition-state model of the male-female reaction, where two human molecules, male Mx and female Fy, collide in time and begin to interact in their common "school orbital" S, among their various possible daily orbitals of work W, gym G, and so on; whereafter, by day 90, the two are orbiting in mutual friends' houses F1 and F2; whereafter, by day 365, owing to the energy-stabilizing effect of the ongoing reaction, the pair marry, thus combining orbitals into the formation of one nuclear family, with a single joint home H acting as the centralized nucleus of the dihumanide molecule. This is depicted below:
Day one: Two people, i.e. human molecules, Mx and Fy, meet in their school orbitals, and begin to associate.
Day 90: The two human molecules develop more orbital overlap (stability) by hanging out at the houses of mutual friends.
Day 365: The two human molecules fuse, by combining their previous separate nuclei into one (they move in together).
A molecular orbital, by technical definition, is a solution of the Schrödinger equation that describes the ninety-percent-probable location of an electron relative to the nuclei in a molecule, and so indicates the nature of any bond in which the electron is involved. In simple terms, it is understood that electrons (and molecules) act as both waves and particles, tending to have orbital motions in their trajectories.
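To make the time-accelerated orbital picture concrete, the following is a minimal sketch, using synthetic stand-in coordinates rather than real GPS data, of how a year of position fixes could be binned into the occupancy-density map described above; the anchors (home, work, gym) and their time shares are assumptions for illustration.

# Sketch: one person's movement viewed as an "occupancy density"
# map, the human-molecular-orbital picture described above.
import random
from collections import Counter

random.seed(0)

# Hypothetical daily anchors (x, y in km) and fraction of time at each.
anchors = {"H": (0.0, 0.0), "W": (5.0, 2.0), "G": (1.0, 4.0)}
shares = {"H": 0.6, "W": 0.3, "G": 0.1}

def sample_position():
    """One position fix: pick an anchor by time share, add local scatter."""
    site = random.choices(list(shares), weights=list(shares.values()))[0]
    x, y = anchors[site]
    return (round(x + random.gauss(0, 0.5)), round(y + random.gauss(0, 0.5)))

# A year of hourly fixes binned to a 1 km grid; the most-visited
# cells form the "orbital" lobes (home nucleus, work lobe, gym lobe).
density = Counter(sample_position() for _ in range(24 * 365))
for cell, hits in density.most_common(5):
    print(f"cell {cell}: {hits} fixes")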
That's life (2005)
Opening section from the 2005 New Scientist article "That's Life", in which a 12-element empirical formula for a human is given. [70]
Starting with the conservation of energy, which assumes that the total energy of a system is equal to the sum of its potential energy and kinetic energy, a descriptive time-dependent 'wave equation' can be derived which describes the movement or behavior, and thus the structure, of the nuclei and electrons that comprise an atom or molecule. This description is particularly intuitive when electrons are shared between two different atoms or molecules, creating a chemical bond, which actuates through the means of an orbital overlap effect. The translation of this logic to the bonding transition states of human interpersonal interactions provides a robust means of understanding human chemical bonding.
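In symbols, this standard construction runs as follows (a textbook sketch for a single particle, not specific to the human case). Starting from energy conservation,

E = \frac{p^2}{2m} + V(\mathbf{r}),

and applying the quantization substitutions E \rightarrow i\hbar\,\partial/\partial t and \mathbf{p} \rightarrow -i\hbar\nabla to the wavefunction \Psi(\mathbf{r},t), one obtains the time-dependent Schrödinger equation:

i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r})\,\Psi(\mathbf{r},t).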
2005: New Scientist
In a 2005 article entitled "That's Life", New Scientist magazine gave a 12-element empirical molecular formula for a human (pictured adjacent), which they define as one's "chemical formula". [70]
2005: Molecular evolution tables
See main: Molecular evolution table
During the writing of the manuscripts for Human Thermodynamics (Volumes 1-3), Thims began to make an evolution table, putting the hydrogen atom in the top row and the human molecule in the bottom row, filling in known intermediates in the middle rows (monkey, shrew, fish, bacteria, etc.), and calculating approximate molecular formulas for each intermediate structure.
This was first posted online in 2005 (IoHT Molecular Evolution Table). These tables, later published in various locations, have become a focal point of discussion and debate for many scientists in this field. A section of the latest version is shown below:
Section of the IoHT Molecular Evolution Table, with columns spanning from 13.7 billion years ago (seconds after the big bang), through 13.2 BYA, 4.4 BYA, 4.1 BYA, 3.9 BYA, and 45 MYA, to 150,000 years ago.
In other words, it is a matter of filling in the blanks, so to speak, to connect the mechanism of synthesis of the human molecule, starting with hydrogen.
American physicist Mark Buchanan's 2007 book The Social Atom, in which he applies physics principles to the modeling of people in mass as collectives of social atoms. [43]
2007: Buchanan | The Social Atom (book)
In 2007, American physicist Mark Buchanan wrote the book The Social Atom, in which he attempts to model each human actor as an individual atom in the crowd of the masses. In addressing the matter as to how to view people atomically, Buchanan remains two-sided as to whether to use the particle (atom) view or the human molecular view:
“We should think of people as if they were atoms or molecules following fairly simple rules and try to learn the patterns to which those rules lead.”
The platform of the book is American economist Thomas Schelling’s 1971 paper “Dynamic Models of Segregation”, the very same paper which seems to have originated the now-famous ‘tipping point theory’, and which concluded to the effect that even if every trace of racism were to vanish tomorrow, something akin to a law of physics might still make the races separate, much like oil and water. [44] This view, to note, is similar to American law professor Richard Delgado’s 1990 law of racial thermodynamics. The subject is discussed further by others in integration and segregation thermodynamics. Some of Buchanan's conclusions, however, are rather incoherent, particularly in his effort to salvage the theory of free will:
“The laws of physics are beginning to provide a new picture of the human atom or [rather] social atom—and this is a picture that does not conflict with the existence of individual free will. Just as atomic-level chaos gives rise to the clockwork precision of thermodynamics, so can free individuals come together into predictable patterns.”
This is similar to English physicist C.G. Darwin's 1952 comment that human molecules have free will owing to their unpredictability, which of course is incorrect, just as is Buchanan's view.
2008: Thims | The Human Molecule (book)
The first book on the subject of the "human molecule", focused on its significance and history, was the 2008 booklet The Human Molecule, 120 pages in length, by American chemical engineer Libb Thims, as previously pictured above, which steps through the views of the three dozen or so individuals to have used this concept in discussion or philosophy. [20] The following chemistry-aphorism-style quote from the 1999 novel Milton's Progress, by Forbes Allan, for instance, is the opening quote to The Human Molecule: [25]
“People are like particles, they behave in groups as if they were molecules in a test-tube.”
Thims' The Human Molecule can be read at (linked below).
Human chemical reactions
See also: Human chemical reaction (history)
The dynamic evolving attachment of human molecules together into structures, e.g. A≡B, such as marriage pairs, friendships, family units, etc., actuates according to the function of human chemical bonds. The rearrangement of bonds, the formation of new bonds, or the dissolution of old bonds, defines the process of a human chemical reaction, such as shown below:
A + B → AB (combination reaction)
AB → A + B (dissolution reaction)
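As a toy illustration of this bookkeeping, and not any established human-chemistry code, the two reaction types above can be modeled as operations on a set of bonded pairs; the entity names are placeholders:

# Sketch: combination and dissolution reactions as operations on a
# set of bonded pairs; entities "A" and "B" are hypothetical.
bonds = set()

def combine(a, b):
    """A + B -> AB: record a bond (e.g. a marriage or friendship)."""
    bonds.add(frozenset((a, b)))

def dissolve(a, b):
    """AB -> A + B: remove the bond if present (e.g. a breakup)."""
    bonds.discard(frozenset((a, b)))

combine("A", "B")   # combination reaction
print(bonds)        # {frozenset({'A', 'B'})}
dissolve("A", "B")  # dissolution reaction
print(bonds)        # set()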
Indian-born American mechanical engineer Kalyan Annamalai and American mechanical engineer Carlos Silva’s 2011 engineering thermodynamics textbook definition of a human, formulaically, as a “26-element energy/heat driven dynamic atomic structure”, based on the work of American chemical engineer Libb Thims (2002). [71]
Human thermodynamics | Engineering thermodynamics
Thinkers including Henry Adams (1890s) and C.G. Darwin (1952) were the first to initiate the study of humans viewed conceptually as “molecules” in the context of thermodynamics—the latter specifically defining the science of human thermodynamics as the study of systems of human molecules.
In human thermodynamics, a set of human molecules partitioned off by an "energetic boundary", i.e. a quantitative spatial demarcation, such as a town border, a social barrier, state lines, corporate boundaries, occupational orbitals, social circles, family boundaries, etc., comprises a closed thermodynamic system of working molecules, i.e. a working body in the words of Clausius, to which first- and second-law energy balances apply in the production of system external work W due to the action of cyclical solar heat input Qin.
In 2011, Indian-born American mechanical engineer Kalyan Annamalai and American mechanical engineer Carlos Silva, in the second edition of their Advanced Thermodynamics Engineering, citing Libb Thims (2002), give the following thermodynamic definition of a human in their “formula” section: [71]
“Humans may be called a 26-element energy/heat driven dynamic atomic structure.”
Annamalai and Silva, of note, are the authors of the 2009 article “Entropy Generation and Human Aging”, on aging (or anti-aging) theory and thermodynamics. [72]
Recent views
In 2006, after being introduced to the human molecule concept the previous year, Russian physical chemist Georgi Gladyshev began to incorporate the human molecule perspective into his theories, and comments that “the conclusions of hierarchical thermodynamics correspond excellently to conception of Libb Thims about the thermodynamics of human molecules”. [57] As of 2010, Gladyshev believes that the aging process of molecules can be explained using his theory of hierarchical thermodynamics. In 2007, as mentioned previously, Russian bioelectrochemist Octavian Ksenzhek stated that:
"The economy of mankind is a very large and extremely complicated system [and] people are the 'molecules' of which it consists."
Ksenzhek goes on to use energy and entropy to study the ways in which the "various associations of people constitute its structural components." [23] In 2010, Martin Gardiner, of the Annals of Improbable Research, the group that administers the Ig Nobel Prizes aiming to spotlight research that makes people laugh and then think, ran a four-part, three-day article on the work of Libb Thims, entitled “I Am Not A Molecule”, subtitled 'Inside the IoHT', discussing topics such as Thims' 2008 book The Human Molecule, the Human Chemistry 101 video lectures on the human molecule, the Institute of Human Thermodynamics, the Journal of Human Thermodynamics, among other topics. Gardiner considers the subject of the chemistry and thermodynamics of human molecules to be an emergent intellectual development.
The 2010 book Employees First, Customers Second, by Indian engineer and business executive Vineet Nayar, carries the logic that employees are the molecular components of the mega-molecular structure of the corporation (corporate molecule), and that if each employee is instilled with an entrepreneurial attitude, the corporation will accelerate with a higher energy quotient. [60]
Nayar spends considerable time discussing ideas on how hidden or latent energy exists in the employees of corporations.
Octillion | atoms in one person
See main: Number of atoms in
To give an idea as to the magnitude of the number of atoms in one human molecule as compared to, for instance, the number of atoms in one water molecule (three) or the number of atoms comprising the earth (sexdecillion), the following table lists names of common larger numbers. [53] The third column, Exp, shows the old-fashioned calculator shorthand for large numbers, in which E is short for exponent: E9, for instance, is short for ×10^9. Exponent shorthand is useful in writing out molecular formulas for biological entities (a parsing sketch follows the table below).
#       Name                  Exp    Number of atoms (example)
10^2    hundred               E2
10^3    thousand              E3
10^6    million               E6
10^9    billion               E9     ten bacteria molecules
10^12   trillion              E12
10^15   quadrillion           E15    ten pre-aquatic worms
10^18   quintillion           E18
10^21   sextillion            E21    one small fish
10^24   septillion            E24
10^27   octillion             E27    one person (human molecule)
10^30   nonillion             E30
10^33   decillion             E33
10^36   undecillion           E36
10^39   duodecillion          E39
10^42   tredecillion          E42
10^45   quattuordecillion     E45
10^48   quindecillion         E48
10^51   sexdecillion          E51    one earth molecule (the earth)
10^54   septendecillion       E54
10^57   octodecillion         E57    one sun molecule (the sun)
10^60   novemdecillion        E60
10^63   vigintillion          E63
10^66   unvigintillion        E66    the milky way galaxy
10^69   duovigintillion       E69
10^72   tresvigintillion      E72
10^75   quattuorvigintillion  E75
10^78   quinquavigintillion   E78
10^81   sesvigintillion       E81    the observable universe
10^84   septemvigintillion    E84
Using these larger number names in context, one would say, for instance, that one human molecule comprises about an octillion atoms, of twenty-six types of 'active' elements.
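Since the exponent shorthand is what makes formulas like the 26-element one above writable at all, the following is a minimal Python sketch of parsing such a formula back into numeric subscripts, under the assumption that each element symbol is followed by an optional count such as 2.5E9 (Python's float() already understands the E notation); the abbreviated sample string is illustrative:

# Sketch: parsing an E-notation empirical formula into per-element
# subscripts; a bare symbol (like V above) counts as 1.
import re

FORMULA = "H2.5E9 O9.7E8 C4.9E8 N4.7E7 Co17 V"  # abbreviated sample

TOKEN = re.compile(r"([A-Z][a-z]?)([\d.]+(?:E\d+)?)?")

def parse(formula):
    """Map each element symbol to its numeric subscript."""
    counts = {}
    for symbol, count in TOKEN.findall(formula):
        if symbol:
            counts[symbol] = float(count) if count else 1.0
    return counts

atoms = parse(FORMULA)
print(atoms)
print(f"sum of relative subscripts = {sum(atoms.values()):.2e}")

Note that the subscripts in such a formula are relative (vanadium taken as unity), so their sum is not the absolute atom count; the absolute count for one person is the octillion figure given above.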
"I am a molecule!" (Apr 2009) [9:32 min]
"I am not a molecule!" (Apr 2009) [6:10 min]
Objections to
Since the 1809 publication of Goethe's Elective Affinities, wherein the characters are said to mirror the activities and behaviors of chemicals, there has been a never-ending stream of criticism regarding the chemical nature of the human being. [21] In 1810, for instance, Goethe's fellow author and neighbor Christoph Wieland sent a letter (which he suggested should be burned after being read) to his close friend, German philologist and archeologist Karl Böttiger, stating that: [22]
"To all rational readers, the use of the chemical theory is nonsense and childish fooling around."
In modern terms, the debate still continues; according to recent Internet polls, about 57% of people agree that they are a giant molecule. [17] Likewise, according to standard molecular evolution tables, it is visually obvious that humans are evolved molecules. In spite of these perspectives, many maintain that humans are in some way different from molecules, particularly when it comes to choice and free will.
In 1996, for instance, Austrian-born American theoretical physicist Fritjof Capra stated incorrectly that "human beings can choose whether and how to obey a social rule; molecules cannot choose whether or not they should interact." [18]
In 2005, American science philosopher and sociologist Steve Fuller, a noted intelligent design advocate, published his New Scientist article "I Am Not a Molecule", arguing against the atomic reductionism in sociology used in recent publications, most notably English physical chemist Philip Ball's 2004 book Critical Mass: How One Thing Leads to Another, American evolutionary biologist Jared Diamond’s 2005 book Collapse: How Societies Choose to Fail or Succeed, and Canadian-born American evolutionary psychologist Steven Pinker’s 2002 book The Blank Slate: the Modern Denial of Human Nature, all of which, according to Fuller, are "infuriating social scientists", presumably including himself. [24]
Likewise, in 2007 Canadian chemist Stephen Lower considered the following statement "people are viewed as chemical species, or specifically human molecules, A or B, and processes such as marriage or divorce are viewed as chemical reactions between individuals..." to be crackpot, meaning it is something akin to an eccentric or lunatic notion, and listed it among a grouping of pseudoscience subjects. [19]
See also
Human chemical
Human chemical element
Human element
Human particle
People are not molecules
Social atom
Joseph Dewey
2. (a) Thims, Libb. (2008). The Human Molecule (issuu) (preview) (Google Books) (docstoc). LuLu.
(b) Molecular Evolution Table - Institute of Human Thermodynamics
3. Strathern, Paul. (2000). Mendeleyev’s Dream – the Quest for the Elements. New York: Berkley Book.
4. Darwin, Charles G. (1952). The Next Million Years (pg. 26). London: Rupert Hart-Davis.
5. Carey, George W. (1919). The Chemistry of Human Life. Los Angeles: The Chemistry of Life Co.
6. Rabinbach, A. (1990). The Human Motor – Energy, Fatigue, and the Origins of Modernity. Berkeley: University of California Press.
8. Thims, Libb. (2002). Human Thermodynamics (Volume One). Chicago: Institute of Human Thermodynamics.
9. Müller, Erich A. (1998). “Human Societies: a Curious Application of Thermodynamics.” Chemical Engineering Education, Vol. 1, No. 3, Summer.
10. Gallagher, Laura. (2006). “A Thermodynamic Personality: Interview with Erich Müller”, Reporter, Issue 162, 24 February.
11. Sterner, Robert W. and Elser, James J. (2002). Ecological Stoichiometry: the Biology of Elements from Molecules to the Biosphere, (chapter one), (pgs. 2-3, 47, 135). Princeton: Princeton University Press.
12. (a) Thims, Libb. (2002). Human Thermodynamics (Volume One), Date: Sept. Chicago: Institute of Human Thermodynamics.
(b) Note: Thims only became aware of Sterner's calculation on February 17, 2008 after doing a Google book search on keywords "human molecule thermodynamics"; Thims then emailed Sterner within the hour (after which Sterner explained how and when he did his calculation).
13. The formula shown is the more accurate 2005-version (as found in the IoHT's molecular evolution table, ref. #2 above). The 2002 calculation was based on less-accurate mass percent data sets, taking nickel as unit.
14. Molecule (definition): “a molecule may be thought of either as a structure built of atoms bound together by chemical forces or as a structure in which two or more nuclei are maintained in some geometrical configuration by attractive forces from a surrounding swarm of negative electrons.” Source: Licker, Mark D. (2002). McGraw-Hill Concise Encyclopedia of Chemistry. New York: McGraw-Hill.
15. Source for Sterner date: "I have attached the spreadsheet used to construct that formula for a human molecule in our book. My copy of the spreadsheet is dated April 18, 2000. I cannot say exactly when we made the calculations. That date might have to do with some modification of the figure or some other edit. At any rate, it gives an indication." (email communication from Robert Sterner to Libb Thims on February 20, 2008).
16. (a) Taine, Hippolyte. (1870). De l’Intelligence (On Intelligence), (Part I, Part II), (pg. xi-xii), London: L. Reeve and Co.
(b) Sparks, Jared. (1873). The North American Review (section: Taine’s philosophy, pg. 403; keyword: “human molecule”). Vol. CXVII. Boston: James R. Osgood and Co.
17. Running Poll: "Are You A Giant Molecule?" (by English physicist Jim Eadon) - 2001-2008+.
18. Capra, Fritjof. (1996). The Web of Life - a New Scientific Understanding of Living Systems, (pg. 212). New York: Anchor Books.
19. Lower, Stephen. (2007). “List of Flim-flam, Pseudoscience, and Nonsense”, Online listings.
23. Ksenzhek, Octavian S. (2007). Money: Virtual Energy - Economy through the Prism of Thermodynamics (pg. 162). Universal Publishers.
24. Fuller, Steve. (2005). "I am not a molecule", New Scientist, Issue 2502, Jun 4.
25. Allan, Forbes. (1999). Milton's Progress (chapter 21). Rowanlea Grove Press.
26. (a) Gardiner, Martin. (2010). “Inside the IoHT: I am not a molecule (parts 1, 2, 3, 4)”, Improbable Research, Jun 04-06.
(b) Martin Gardiner (about) –
27. Samuels, Ernest. (1989). Henry Adams (human molecule, pg. 115). Harvard University Press.
28. Adams, Henry. (1885). “Letter to Wife”, April 12; In: The Letters of Henry Adams: 1858-1868, Volume 1 (pg. xxviii). Harvard University Press, 1982.
29. Ramsay, William. (1898). “The Kinetic Theory of Gases and Some of its Consequences” (human molecules, pg. 685). The Contemporary Review, 74: 681-91.
30. (a) Patten, William. (1919). “The Message of the Biologist”, Address of the vice-president and chairman of Section F, Zoology, American Association for the Advancement of Science, St. Louis, Jan 31.
(b) Patten, William. (1920). “The Message of the Biologist”, Science, pgs. 93-101, Jan 30.
31. Ksenzhek, Octavian S. (2007). Money: Virtual Energy - Economy through the Prism of Thermodynamics (pgs. 162, 170). Universal Publishers.
32. Prugh, Thomas and Costanza, Robert. (1999). Natural capital and Human Economic Survival (economic molecules, pg. 15; human molecules, pg. 17). CRC Press.
33. Gryzanowski, Ernst. (1875). “Comtism” (human molecules, social molecule, pg. 276). The North American Review, 120: 237-80, April.
34. Schumpeter, Joseph. (1942). Capitalism, Socialism, and Democracy (human molecules, pg. 204). Routledge.
35. Anon. (1894). “As Others See Us” (human molecules, pg. 217), Journal of Education, Apr 01.
36. Leclerc, Max. (1894). L’Education des Classes Moyennes et Dirigeantes en Angleterre (Education in the Middle Classes in England) (molécules humaines, pg. 65). Paris: Armand Colin et Cie.
37. Goleman, Daniel. (1998). Working with Emotional Intelligence (human molecules, pg. 215). Random House.
38. (a) Berlioz, Hector. (1854). “Les Soirees de l’Orchestre” (Evenings with the Orchestra) (molécules humaines, pg. 259), Entierement Revue Et Corrigee.
(b) Novello, Sabilla. (1855). “Translation from Hector Berlioz’s ‘Soireez de l’orchestre’”, The Musical Times, Jul 15.
39. Snow, Alpheus Henry. (1913). “Book Review: A Short History of War and Peace by G. H. Perris.” The American Journal of International Law, 7: 427-29.
40. Perris, George. (1911). A Short History of War and Peace (molecule, pg. 7-8). H. Holt and Co.
41. Hodgson, John. (2002). Little Fun Book of Molecules/Humans. 1st Books.
42. Peachey, Paul. (2001). Leaving and Clinging: the Human Significance of the Conjugal Union (ch. 1: “The Marital Bond as the Human Molecule”, pgs. 3-20). University Press of America.
43. Buchanan, Mark. (2007). The Social Atom: why the Rich get Richer, Cheaters get Caught, and Your Neighbor Usually Looks Like You (pgs. x-xi, 13). New York: Bloomsbury.
44. (a) Schelling, Thomas. (1971). “Dynamic Models of Segregation”, Journal of Mathematical Sociology, 1: 143-86.
(b) Tipping point (sociology) – Wikipedia.
45. Nisbet, Robert A. (1970). The Social Bond: an Introduction to the Study of Society (social molecule, pgs. 38, 45, etc.). New York: Alfred A. Knopf.
46. Sales, Jean. (1798). De la Philosophie de la Nature: ou Traité de morale pour le genre humain, tiré de la philosophie et fondé sur la nature (The Philosophy of Nature: Treatise on Human Moral Nature, from Philosophy and Nature), Volume 4 (molécules humaines, pg. 281). Publisher.
47. Cukier, Rosa. (2007). Words of Jacob Levi Moreno: a Glossary of the Terms used by J. L. Moreno (terms: ambivalence of choice, pg. 42; atom, cultural atom, social atom, pg. 47-51, 358; energy, pg. 141, social entropy, pgs. 88, 360; spontaneity, pg. 393; warming up process, pg. 495-503). Lulu.
48. Dreier, Thomas. (1910). Human Chemicals. (27-pgs). Backbone Society.
49. Dreier, Thomas. (1948). We Human Chemicals: the Knack of Getting Along with Everybody. Updegraff Press.
50. Fairburn, William Armstrong. (1914). Human Chemistry. The Nation Valley Press, Inc.
51. Winiarski, Leon. (1967). Essais sur la Mécanique Sociale (molécules, pgs. 11-12, 34, 95, 163-64, 170, 195). Librairie Droz.
53. Names of larger numbers – Wikipedia.
54. Fountain, Henry. (2009). “Experiments Show that Molecules Can Walk, but Can They Dance?”, New York Times, Science, Apr 07.
55. Hadlington, Simon. (2009). “Two-legged Molecular Walker Takes a Stroll”, Dec 21.
56. Zeleny, Leslie D. (1949). “A Note on the Social Atom: an Illustration”, Sociometry, 12: 341-43.
57. Gladyshev, Georgi P. (2006). "The Principle of Substance Stability is Applicable to all Levels of Organization of Living Matter", International Journal of Molecular Sciences, 7:98-110.
58. Human Molecule (1988 acrylic on canvas) – Authentic Norval Morrisseau Blog.
59. Rousseau, Pierre. (2006). “The Constant flow of Human Molecules”,
60. (a) Murali, D. (2010). “Human particles in the Corporate Molecule”, The Hindu, May 29.
(b) Nayar, Vineet. (2010). Employees First, Customers Second (quote, pg. 165). Harvard Business Books.
61. Carey, H.C. and McKean, Kate. (1874). Manual of Social Science: Being a Condensation of ‘The Principles of Social Science’ of H.C. Carey (ch.1: Social Science, pgs. 25; molecule, pg. 37). Industrial Publisher.
62. Stark, Werner. (1962). The Fundamental Forms of Social Thought. (Carey, 143-59; human molecules, pgs. 87-90, 126, 146 (quote), 243, 25). Routledge.
63. Hodgson, John. (2010). molecules humans.
64. (a) Louis de Broglie – Wikipedia.
(b) Wave-particle duality – Wikipedia.
66. Human molecule (definition) - Human Thermodynamics Glossary.
67. Macrone, Michael and Lulevitch, Tom. (1999). Eureka!: 81 Key Ideas Explained (section: Entropy, pgs. 129-33; image pg. 130). Barnes & Noble Publishing.
68. Staff. (1953). “Science: the Million-Year Prophecy”, Time, Jan 19.
69. Mesny, Mary B. (1910). “Human Molecules”, The Smart Set: a Magazine of Cleverness, 31:100.
70. Author. (2005). “That’s Life”, New Scientist, Dec 03.
72. Silva, Carlos A. and Annamalai, Kalyan. (2009). “Entropy Generation and Human Aging: Lifespan Entropy and Effect of Diet Composition and Caloric Restriction Diets”, Journal of Human Thermodynamics, Jan 23.
Thread: "Why I'm not a molecule" (started by Petrologist, Aug 24 2009; 19 replies; last post Nov 9 2011 by Sadi-Carnot)
First, I think that using thermodynamics, with its concept of efficiency, might be a very productive way of exploring some sciences other than physical. To show this, one needs to create at least one theory, test it whenever possible, and show it makes interesting, new predictions of value to us.
To do the above, one need not slavishly adopt terminology from the physical sciences. When I write about the thermodynamics of rock interactions, I find it convenient to create new terms. These terms, however, are of value only if they are synonyms of terms in other sciences, mathematics, or everyday language.
Here the human molecule of the social sciences fails. Sometimes, when discussing giving birth or dying, one wants instead the chemical substance of human beings, as with the quantity or mass of the unit NaCl in a halite-bearing rock. At other times, one wants a collection of separate but equivalent entities whose bonds one can define in psychological terms. These should be different terms.
Each of the above has some properties of a molecule, but not all. 'Molecule' now brings to mind a discrete substance (floating about) made of the same number & kinds of atoms, bonded in the same manner. Such molecules differ only in the physical properties 'isotopic mass' and 'handedness'. Geologists use instead 'substance', a much more flexible term. Substances react, and classical thermodynamics studies them. The chemical formulae above represent chemical compositions of the human substance.
I have no suggestion for people in a crowd but, perhaps, 'person'.
These are just first impressions. I might change my mind if 'flash' worked on my computer. :-)
Thread: "orbitals" (started by D.Boss, Aug 4 2012; 1 reply; last post Aug 4 2012 by Sadi-Carnot)
Can you point to applications of the human molecular orbitals? Seems like an interesting theory.
Thread: "factors outside chemistry" (started by Anonymous, Jun 21 2012; 1 reply; last post Jun 22 2012 by Sadi-Carnot)
Hi. I was wondering if you acknowledge factors other than chemistry? For instance, given the human molecule, do you think it can explain and predict everything that a human consists of?
Thread: "The term 'Human molecule'" (started by GGladyshev, Jul 3 2010; 5 replies; last post Oct 20 2010 by GGladyshev)
I believe that we should add a note on the term "human molecule". A "human molecule" is a particle of variable chemical and supramolecular composition. These compositions depend on the age of the body. The same situation takes place in phylogeny and evolution. This is a consequence of the action of hierarchical thermodynamics. It was first shown in my work (Gladyshev G.P. On the Thermodynamics of Biological Evolution // Journal of Theoretical Biology. 1978. Vol. 75, Issue 4, Dec 21, pp. 425-441; preprint, May 1977, "On the thermodynamics of biological evolution").
Then there were some publications which confirm this on the basis of quantitative calculations (Gladyshev G.P. On the Principle of Substance Stability and Thermodynamic Feedback // Biology Bulletin, 2002, Vol. 29, No. 1, pp. 1-4).
I will be in Moscow in September only.
|
55f1c9566aec805b | Sturm–Liouville theory
In mathematics and its applications, a classical Sturm–Liouville equation, named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882), is a real second-order linear differential equation of the form
-\frac{\mathrm{d}}{\mathrm{d}x}\left[p(x)\frac{\mathrm{d}y}{\mathrm{d}x}\right]+q(x)y=\lambda w(x)y, \qquad (1)
where y is a function of the free variable x. Here the functions p(x), q(x), and w(x) > 0 are specified at the outset. In the simplest case, all coefficients are continuous on the finite closed interval [a, b] and p has a continuous derivative. A function y is then called a solution if it is continuously differentiable on (a, b) and satisfies equation (1) at every point in (a, b). In addition, the unknown function y is typically required to satisfy some boundary conditions at a and b. The function w(x), which is sometimes denoted r(x), is called the "weight" or "density" function.
The value of λ is not specified in the equation; finding the values of λ for which there exists a non-trivial solution of (1) satisfying the boundary conditions is part of the problem called the Sturm–Liouville (S–L) problem.
Such values of λ when they exist are called the eigenvalues of the boundary value problem defined by (1) and the prescribed set of boundary conditions. The corresponding solutions (for such a λ) are the eigenfunctions of this problem. Under normal assumptions on the coefficient functions p(x), q(x), and w(x) above, they induce a Hermitian differential operator in some function space defined by boundary conditions. The resulting theory of the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in a suitable function space became known as Sturm–Liouville theory. This theory is important in applied mathematics, where S–L problems occur very commonly, particularly when dealing with linear partial differential equations that are separable.
A Sturm–Liouville (S–L) problem is said to be regular if p(x), w(x) > 0, and p(x), p'(x), q(x), and w(x) are continuous functions over the finite interval [a, b], and the problem has separated boundary conditions of the form
\alpha_1 y(a)+\alpha_2 y'(a)=0 \qquad (\alpha_1^2+\alpha_2^2>0), \qquad (2)
\beta_1 y(b)+\beta_2 y'(b)=0 \qquad (\beta_1^2+\beta_2^2>0). \qquad (3)
Under the assumption that the S–L problem is regular, the main tenet of Sturm–Liouville theory states that:
• The eigenvalues λ1, λ2, λ3, ... of the regular Sturm–Liouville problem (1)-(2)-(3) are real and can be ordered such that
\lambda_1 < \lambda_2 < \lambda_3 < \cdots < \lambda_n < \cdots \to \infty;
• Corresponding to each eigenvalue λn is a unique (up to a normalization constant) eigenfunction yn(x) which has exactly n − 1 zeros in (a, b). The eigenfunction yn(x) is called the n-th fundamental solution satisfying the regular Sturm–Liouville problem (1)-(2)-(3).
• The normalized eigenfunctions form an orthonormal basis:
\int_a^b y_n(x)y_m(x)w(x)\,\mathrm{d}x = \delta_{mn},
in the Hilbert space L2([a, b], w(x)dx). Here δmn is the Kronecker delta.
Note that, unless p(x) is continuously differentiable and q(x), w(x) are continuous, the equation has to be understood in a weak sense.
Sturm–Liouville form
The differential equation (1) is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear ordinary differential equations can be recast in the form on the left-hand side of (1) by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if y is a vector.)
The Bessel equation
x^2y''+xy'+\left (x^2-\nu^2 \right )y=0
which can be written in Sturm–Liouville form as
(xy')'+ \left (x-\frac{\nu^2}{x}\right )y=0.
The Legendre equation
\left(1-x^2\right)y''-2xy'+\nu(\nu+1)y=0
which can easily be put into Sturm–Liouville form, since D(1 − x^2) = −2x, so the Legendre equation is equivalent to
\left(\left(1-x^2\right)y'\right)'+\nu(\nu+1)y=0.
An example using an integrating factor
x^3y''-xy'+2y=0
Divide throughout by x^3:
y''-\frac{1}{x^2}y'+\frac{2}{x^3}y=0
Multiplying throughout by an integrating factor of
\mu(x) = e^{\int -\frac{x}{x^3}\,\mathrm{d}x}=e^{\int -\frac{1}{x^2}\, \mathrm{d}x}=e^{\frac{1}{x}},
gives
e^{\frac{1}{x}}y''-\frac{e^{\frac{1}{x}}}{x^2} y'+ \frac{2 e^{\frac{1}{x}}}{x^3} y = 0
which can be easily put into Sturm–Liouville form since
D e^{\frac{1}{x}} = -\frac{e^{\frac{1}{x}}}{x^2}
so the differential equation is equivalent to
(e^{\frac{1}{x}}y')'+\frac{2 e^{\frac{1}{x}}}{x^3} y =0.
The integrating factor for a general second-order differential equation
P(x)y''+Q(x)y'+R(x)y=0
is obtained by multiplying through by the integrating factor
\mu(x) = \frac{1}{P(x)} e^{\int \frac{Q(x)}{P(x)} \mathrm{d}x},
and then collecting gives the Sturm–Liouville form:
\frac{d}{dx} (\mu(x)P(x)y')+\mu(x)R(x)y=0
or, explicitly,
\frac{d}{dx} \left (e^{\int \frac{Q(x)}{P(x)} \mathrm{d}x}y' \right )+\frac{R(x)}{P(x)} e^{\int \frac{Q(x)}{P(x)}\,\mathrm{d}x} y = 0
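As a quick sanity check, this reduction to Sturm–Liouville form can be verified symbolically. The following is a minimal sketch (assuming SymPy is available), applied to the x^3y''-xy'+2y=0 example above; the variable names are ours, not the article's.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# The example equation from the text: x^3 y'' - x y' + 2 y = 0
P, Q, R = x**3, -x, 2

# Integrating factor mu = (1/P) * exp( integral(Q/P) )
mu = sp.exp(sp.integrate(Q / P, x)) / P
p = sp.simplify(mu * P)   # coefficient p(x) of the S-L form
q = sp.simplify(mu * R)   # coefficient q(x) of the S-L form

print(p)  # exp(1/x)
print(q)  # 2*exp(1/x)/x**3

# Check: (p y')' + q y reproduces mu * (P y'' + Q y' + R y) identically.
expr1 = sp.diff(p * sp.diff(y(x), x), x) + q * y(x)
expr2 = mu * (P * sp.diff(y(x), x, 2) + Q * sp.diff(y(x), x) + R * y(x))
print(sp.simplify(expr1 - expr2))  # 0
```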
Sturm–Liouville equations as self-adjoint differential operators
The map
Lu = \frac{1}{w(x)} \left(-\frac{\mathrm{d}}{\mathrm{d}x}\left[p(x)\frac{\mathrm{d}u}{\mathrm{d}x}\right]+q(x)u \right)
can be viewed as a linear operator mapping a function u to another function Lu. We may study this linear operator in the context of functional analysis. In fact, equation (1) can be written as
L u = \lambda u.
This is precisely the eigenvalue problem; that is, we are trying to find the eigenvalues λ1, λ2, λ3, ... and the corresponding eigenvectors u1, u2, u3, ... of the L operator. The proper setting for this problem is the Hilbert space L2(a, b, w(x) dx) with scalar product
\langle f, g\rangle = \int_{a}^{b} \overline{f(x)} g(x)w(x)\,\mathrm{d}x.
In this space L is defined on sufficiently smooth functions which satisfy the above boundary conditions. Moreover, L gives rise to a self-adjoint operator. This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem one looks at the resolvent
(L - z)^{-1}, \qquad z \in\mathbb{C},
where z is chosen to be some real number which is not an eigenvalue. Then, computing the resolvent amounts to solving the inhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem this integral operator is compact and existence of a sequence of eigenvalues αn which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators. Finally, note that
(L-z)^{-1} u = \alpha u, \qquad L u = \left (z+\alpha^{-1} \right ) u,
are equivalent.
If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular. In this case the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of a S–L equation.
We wish to find a function u(x) which solves the following Sturm–Liouville problem:
L u = -\frac{\mathrm{d}^2u}{\mathrm{d}x^2} = \lambda u
where the unknowns are λ and u(x). As above, we must add boundary conditions; we take, for example,
u(0) = u(\pi) = 0.
Observe that if k is any integer, then the function
u(x) = \sin kx
is a solution with eigenvalue λ = k2. We know that the solutions of a S–L problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition) we conclude that the S–L problem in this case has no other eigenvectors.
Given the preceding, let us now solve the inhomogeneous problem
L u =x, \qquad x\in(0,\pi)
with the same boundary conditions. In this case, we must write f(x) = x in a Fourier series. The reader may check, either by integrating ∫exp(ikx)x dx or by consulting a table of Fourier transforms, that we thus obtain
L u =\sum_{k=1}^{\infty}-2\frac{(-1)^k}{k}\sin kx.
This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. But since the Fourier coefficients are "square-summable", the Fourier series converges in L2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability and, at jump points (the function x, considered as a periodic function, has a jump at π), converge to the average of the left and right limits (see convergence of Fourier series).
Therefore, by using formula (4), we obtain that the solution is
u=\sum_{k=1}^{\infty}2\frac{(-1)^k}{k^3}\sin kx.
In this case, we could have found the answer using anti-differentiation. This technique yields
u= \tfrac{1}{6} \left (x^3 -\pi^2 x \right),
whose Fourier series agrees with the solution we found. The anti-differentiation technique is no longer useful in most cases when the differential equation is in many variables.
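A quick numerical check that the series solution matches the closed form is easy to run; the sketch below (assuming NumPy) truncates the series at 2000 terms.

```python
import numpy as np

# Compare the eigenfunction-expansion solution of -u'' = x, u(0)=u(pi)=0
# with the closed form u(x) = (x^3 - pi^2 x)/6 found by anti-differentiation.
x = np.linspace(0.0, np.pi, 201)
u_series = np.zeros_like(x)
for k in range(1, 2000):
    # solution coefficients 2(-1)^k / k^3 from the text
    u_series += 2.0 * (-1)**k / k**3 * np.sin(k * x)

u_exact = (x**3 - np.pi**2 * x) / 6.0
print(np.max(np.abs(u_series - u_exact)))  # ~1e-7: series and closed form agree
```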
Application to normal modes
Certain partial differential equations can be solved with the help of S–L theory. Suppose we are interested in the modes of vibration of a thin membrane, held in a rectangular frame, 0 ≤ x ≤ L1, 0 ≤ y ≤ L2. The equation of motion for the vertical membrane's displacement, W(x, y, t) is given by the wave equation:
\frac{\partial^2W}{\partial x^2}+\frac{\partial^2W}{\partial y^2} = \frac{1}{c^2}\frac{\partial^2W}{\partial t^2}.
The method of separation of variables suggests looking first for solutions of the simple form W = X(x) × Y(y) × T(t). For such a function W the partial differential equation becomes X''/X + Y''/Y = (1/c^2)T''/T. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X'' = λX for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L1 or y = 0, L2 and define the simplest possible S–L eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence,
W_{mn}(x,y,t) = A_{mn}\sin\left(\frac{m\pi x}{L_1}\right)\sin\left(\frac{n\pi y}{L_2}\right)\cos\left(\omega_{mn}t\right)
where m and n are non-zero integers, Amn are arbitrary constants, and
\omega^2_{mn} = c^2 \left(\frac{m^2\pi^2}{L_1^2}+\frac{n^2\pi^2}{L_2^2}\right).
The functions Wmn form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies \omega_{mn}. This representation may require a convergent infinite sum.
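For concreteness, the mode frequencies ω_mn are trivial to tabulate numerically; here is a minimal sketch (assuming NumPy) with illustrative values c = 1, L1 = 1, L2 = 2 of our choosing.

```python
import numpy as np

# Lowest vibration frequencies of the rectangular membrane,
# omega_mn = c*pi*sqrt(m^2/L1^2 + n^2/L2^2); c, L1, L2 are made-up values.
c, L1, L2 = 1.0, 1.0, 2.0
modes = [(m, n, c * np.pi * np.hypot(m / L1, n / L2))
         for m in range(1, 4) for n in range(1, 4)]
for m, n, w in sorted(modes, key=lambda t: t[2]):
    print(f"mode ({m},{n}): omega = {w:.4f}")
```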
Representation of solutions and numerical calculation
The Sturm–Liouville differential equation (1) with boundary conditions may be solved in practice by a variety of numerical methods. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.
1. Shooting methods.[1][2] These methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say a, of the interval [a, b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues. (A numerical sketch follows this list.)
2. Finite difference method (a sketch also follows this list).
3. The Spectral Parameter Power Series (SPPS) method[3] makes use of a generalization of the following fact about second-order ordinary differential equations: if y is a solution which does not vanish at any point of [a, b], then the function
y(x) \int_a^x \frac{\mathrm{d}t}{p(t)y(t)^2}
is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value λ0* (often λ0* = 0; it does not need to be an eigenvalue) and any solution y0 of (1) with λ = λ0* which does not vanish on [a, b]. (Ways to find appropriate y0 and λ0* are discussed below.) Two sequences of functions X(n)(x), X~(n)(x) on [a, b], referred to as iterated integrals, are defined recursively as follows. First, when n = 0, they are taken to be identically equal to 1 on [a, b]. To obtain the next functions they are multiplied alternately by 1/(py02) and wy02 and integrated, specifically
X^{(n)}(x) = \begin{cases} -\int_a^x X^{(n-1)}(t)\, p(t)^{-1} y_0(t)^{-2}\,\mathrm{d}t & n \text{ odd}, \\
\int_a^x X^{(n-1)}(t)\, y_0(t)^{2} w(t)\,\mathrm{d}t & n \text{ even}, \end{cases}
\widetilde X^{(n)}(x) = \begin{cases} \int_a^x \widetilde X^{(n-1)}(t)\, y_0(t)^{2} w(t)\,\mathrm{d}t & n \text{ odd}, \\
-\int_a^x \widetilde X^{(n-1)}(t)\, p(t)^{-1} y_0(t)^{-2}\,\mathrm{d}t & n \text{ even}, \end{cases}
when n > 0. The resulting iterated integrals are now applied as coefficients in the following two power series in λ:
u_0 = y_0 \sum_{k=0}^\infty \left (\lambda-\lambda_0^* \right )^k \widetilde X^{(2k)}, \qquad (5)
u_1 = y_0 \sum_{k=0}^\infty \left (\lambda-\lambda_0^* \right )^k X^{(2k+1)}. \qquad (6)
Then for any λ (real or complex), u0 and u1 are linearly independent solutions of the corresponding equation (1). (The functions p(x) and q(x) take part in this construction through their influence on the choice of y0.)
Next one chooses coefficients c0, c1 so that the combination y = c0u0 + c1u1 satisfies the first boundary condition (2). This is simple to do since X(n)(a) = 0 and X~(n)(a) = 0, for n > 0. The values of X(n)(b) and X~(n)(b) provide the values of u0(b) and u1(b) and the derivatives u0'(b) and u1'(b), so the second boundary condition (3) becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues.
When λ = λ0*, this reduces to the original construction described above for a solution linearly independent from a given one. The representations (5) and (6) also have theoretical applications in Sturm–Liouville theory.[3] (A numerical SPPS sketch follows.)
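To make the three approaches above concrete, here are minimal sketches for the model problem −u'' = λu, u(0) = u(π) = 0, whose exact eigenvalues are 1, 4, 9, ... They assume NumPy and SciPy; all function and variable names are ours.

A shooting-method sketch (item 1): integrate the initial value problem from x = 0 and adjust λ until the solution hits zero at x = π.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint_value(lam):
    # IVP matching the left boundary condition: y(0) = 0, y'(0) = 1.
    sol = solve_ivp(lambda x, y: [y[1], -lam * y[0]],
                    (0.0, np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]          # mismatch at the right endpoint, y(pi)

# Scan for sign changes of y(pi) as lam varies, then refine each root.
lams = np.linspace(0.5, 10.0, 200)
vals = [endpoint_value(l) for l in lams]
for a, b, fa, fb in zip(lams, lams[1:], vals, vals[1:]):
    if fa * fb < 0:
        print(brentq(endpoint_value, a, b))   # ~1.0, ~4.0, ~9.0
```

A finite-difference sketch (item 2): discretize −u'' on a uniform grid with Dirichlet ends and take the smallest eigenvalues of the resulting tridiagonal matrix.

```python
import numpy as np

N = 500                          # interior grid points
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
print(np.linalg.eigvalsh(A)[:3])  # ~ [1.0, 4.0, 9.0]
```

An SPPS sketch (item 3): here p = w = 1, q = 0, λ0* = 0, and the nonvanishing seed solution is y0 = 1, so both weights 1/(py0^2) and wy0^2 are 1. The truncated power series for u1(π) gives a polynomial in λ whose roots approximate the eigenvalues.

```python
import numpy as np
from scipy.optimize import brentq

N, K = 2001, 25                      # grid points, series truncation
x = np.linspace(0.0, np.pi, N)

def cumint(f):
    """Cumulative trapezoid-rule integral from a to x on the grid."""
    return np.concatenate(([0.0],
                           np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))))

Xs = [np.ones(N)]                    # iterated integrals X^(0), X^(1), ...
for n in range(1, 2 * K + 2):
    Xs.append(-cumint(Xs[-1]) if n % 2 == 1 else cumint(Xs[-1]))

# Second boundary condition: u1(pi) = sum_k lam^k X^(2k+1)(pi) = 0.
coeffs = [Xs[2 * k + 1][-1] for k in range(K + 1)]
P = lambda lam: sum(c * lam**k for k, c in enumerate(coeffs))

lam_grid = np.linspace(0.1, 20.0, 400)
for a, b in zip(lam_grid, lam_grid[1:]):
    if P(a) * P(b) < 0:
        print(brentq(P, a, b))       # ~1.0, 4.0, 9.0, 16.0
```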
Construction of a nonvanishing solution
The SPPS method can, itself, be used to find a starting solution y0. Consider the equation (py')' = μqy; i.e., q, w, and λ are replaced in (1) by 0, −q, and μ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue μ0 = 0. While there is no guarantee that u0 or u1 will not vanish, the complex function y0 = u0 + iu1 will never vanish because two linearly independent solutions of a regular S–L equation cannot vanish simultaneously as a consequence of the Sturm separation theorem. This trick gives a solution y0 of (1) for the value λ0 = 0. In practice if (1) has real coefficients, the solutions based on y0 will have very small imaginary parts which must be discarded.
Application to PDEs
For a linear second-order partial differential equation in one spatial dimension and first order in time, of the form:
f(x) \frac{\partial^2 u}{\partial x^2} + g(x) \frac{\partial u}{\partial x}+h(x) u= \frac{\partial u}{\partial t}+k(t) u
Let us apply separation of variables, imposing that:
u(x,t) =X(x) T(t)
Then our above PDE may be written as:
\frac{\hat{L} X(x)}{X(x)} = \frac{\hat{M} T(t)}{T(t)}
where
\hat{L}=f(x) \frac{\mathrm{d}^2}{\mathrm{d} x^2}+g(x) \frac{\mathrm{d}}{\mathrm{d}x}+h(x), \qquad \hat{M}=\frac{\mathrm{d}}{\mathrm{d}t} +k(t).
Since, by definition, \hat{L} and X(x) are independent of time t and \hat{M} and T(t) are independent of position x, then both sides of the above equation must be equal to a constant:
\hat{L} X(x) =\lambda X(x)
X(a)=X(b)=0 \,
\hat{M} T(t) =\lambda T(t) \,
The first of these equations must be solved as a Sturm–Liouville problem. Since no general analytic (exact) solution of a Sturm–Liouville problem is available, we assume that this problem has already been solved (in general, numerically), that is, that we have the eigenfunctions X_n (x) and eigenvalues \lambda_n . The second of these equations can be solved analytically once the eigenvalues are known.
\frac{\mathrm{d}}{\mathrm{d}t} T_n (t)= (\lambda_n -k(t))\, T_n (t),
with solution
T_n (t) = a_n e^{\lambda_n t -\int_0^t k(\tau)\, \mathrm{d}\tau}.
The full solution is then the superposition
u(x,t) =\sum_n a_n X_n (x)\, e^{\lambda_n t -\int_0^t k(\tau)\, \mathrm{d}\tau},
where the coefficients are fixed by the initial condition u(x, 0) = s(x):
a_n =\frac{(X_n (x), s(x))}{(X_n(x),X_n (x))},
with the inner product
(y(x),z(x))=\int_a^b y(x) z(x) w(x)\, \mathrm{d}x,
in which the weight function is
w(x)= \frac{e^{\int \frac{g(x)}{f(x)}\, \mathrm{d}x}}{f(x)}.
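As an illustration, here is a minimal sketch (assuming NumPy) for the heat equation u_t = u_xx on (0, π) with u(0,t) = u(π,t) = 0, i.e. f = 1 and g = h = k = 0 above, so that X_n = sin(nx), λ_n = −n^2, and w = 1; the initial condition s(x) = x(π − x) is made up for the example.

```python
import numpy as np

x = np.linspace(0.0, np.pi, 401)
s = x * (np.pi - x)                       # illustrative initial condition

def solution(t, n_max=50):
    u = np.zeros_like(x)
    for n in range(1, n_max + 1):
        Xn = np.sin(n * x)
        a_n = (Xn @ s) / (Xn @ Xn)        # a_n = (X_n, s)/(X_n, X_n), w = 1
        u += a_n * Xn * np.exp(-n**2 * t) # T_n(t) = e^{lambda_n t}, lambda_n = -n^2
    return u

print(np.max(np.abs(solution(0.0) - s)))  # small: the series reproduces s(x)
print(np.max(np.abs(solution(1.0))))      # the solution decays in time
```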
References
1. ^ J. D. Pryce, Numerical Solution of Sturm–Liouville Problems, Clarendon Press, Oxford, 1993.
2. ^ V. Ledoux, M. Van Daele, G. Vanden Berghe, "Efficient computation of high index Sturm–Liouville eigenvalues for problems in physics," Comput. Phys. Comm. 180, 2009, 532–554.
3. ^ a b V. V. Kravchenko, R. M. Porter, "Spectral parameter power series for Sturm–Liouville problems," Mathematical Methods in the Applied Sciences (MMAS) 33, 2010, 459–468
|
fab2c8a25d0c025e | Time evolution of the density operator
Time evolution of the density operator
The time evolution of the density operator \rho(t) can be predicted directly from the Schrödinger equation. Since \rho(t) is given by
\rho(t) = \sum_k w_k\, |\psi_k(t)\rangle\langle\psi_k(t)|,
the time derivative is given by
\frac{\partial \rho}{\partial t} = \sum_k w_k \left[\left(\frac{\partial}{\partial t}|\psi_k(t)\rangle\right)\langle\psi_k(t)| + |\psi_k(t)\rangle\left(\frac{\partial}{\partial t}\langle\psi_k(t)|\right)\right] = \frac{1}{i\hbar}\left(H\rho - \rho H\right) = -\frac{i}{\hbar}\,[H,\rho],
where the second line follows from the fact that the Schrödinger equation for the bra state vector \langle\psi_k(t)| is
-i\hbar \frac{\partial}{\partial t}\langle\psi_k(t)| = \langle\psi_k(t)|\,H.
Note that the equation of motion for \rho(t) differs from the usual Heisenberg equation by a minus sign! Since \rho(t) is constructed from state vectors, it is not an observable like other hermitian operators, so there is no reason to expect that its time evolution will be the same. The general solution to its equation of motion is
\rho(t) = e^{-iHt/\hbar}\,\rho(0)\,e^{iHt/\hbar}.
The equation of motion for \rho(t) can be cast into a quantum Liouville equation by introducing the operator
iL\,\cdot \;\equiv\; \frac{1}{i\hbar}\,[\,\cdot\,,H] \;=\; \frac{i}{\hbar}\,[H,\,\cdot\,].
In terms of iL, it can be seen that \rho(t) satisfies
\frac{\partial \rho}{\partial t} = -iL\rho.
What kind of operator is iL? It acts on an operator and returns another operator. Thus, it is not an operator in the ordinary sense, but is known as a superoperator or tetradic operator (see S. Mukamel, Principles of Nonlinear Optical Spectroscopy, Oxford University Press, New York (1995)).
Defining the evolution equation for \rho this way, we have a perfect analogy between the density matrix and the state vector. The two equations of motion are
\frac{\partial}{\partial t}|\psi(t)\rangle = -\frac{i}{\hbar}H\,|\psi(t)\rangle, \qquad \frac{\partial \rho}{\partial t} = -iL\rho.
We also have an analogy with the evolution of the classical phase space distribution f(x,p,t), which satisfies
\frac{\partial f}{\partial t} = -iL_{\mathrm{cl}} f, \qquad iL_{\mathrm{cl}} f = \{f,H\},
with iL_{\mathrm{cl}} being the classical Liouville operator, built from the Poisson bracket. Again, we see that the classical limit of the commutator, \frac{1}{i\hbar}[A,B] \to \{A,B\}, is the classical Poisson bracket.
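A quick numerical check of these relations is straightforward; the sketch below (assuming NumPy and SciPy, with ħ = 1 and an arbitrary two-level Hamiltonian of our choosing) integrates the quantum Liouville equation by explicit Euler steps and compares the result with the closed-form solution ρ(t) = e^{−iHt} ρ(0) e^{iHt}.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])          # sample Hamiltonian
rho0 = np.array([[0.75, 0.25], [0.25, 0.25]])    # a valid density matrix

# Integrate d(rho)/dt = -(i/hbar)[H, rho] with crude Euler steps (hbar = 1).
t, dt = 2.0, 1e-4
rho = rho0.astype(complex)
for _ in range(int(round(t / dt))):
    rho = rho + dt * (-1j) * (H @ rho - rho @ H)

# Closed-form solution rho(t) = U rho(0) U^dagger with U = exp(-iHt).
U = expm(-1j * H * t)
print(np.max(np.abs(rho - U @ rho0 @ U.conj().T)))  # ~1e-3: they agree
```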
Mark Tuckerman
|
4eb10f8cafdaeef8 | Mike the Mad Biologist
No, We Don’t Need to Slow Down Moore’s Law
Matthew Yglesias
writes regarding Moore’s Law, which states that CPU transistor counts double every two years:
My pet notion is that improvements in computer power have been, in some sense, come along at an un-optimally rapid pace. To actually think up smart new ways to deploy new technology, then persuade some other people to listen to you, then implement the change, then have the competitive advantage this gives you play out in the form of increased market share takes time. The underlying technology is changing so rapidly that it may not be fully worthwhile to spend a lot of time thinking about optimizing the use of existing technology. And senior officials in large influential organizations may simply be uncomfortable with state of the art stuff. But the really big economic gains come not from the existence of new technologies but their actual use to accomplish something. So I conjecture that if after doubling, then doubling again, then doubling a third time the frontier starts advancing more slowly we might actually start to see more “real world” gains as people come up with better ideas for what to do with all this computing power.
The problem is that we know what to do with this power, we just need more of it. Actually, we don’t even have enough power to handle the data we currently generate. Consider computation in genomics, something I’ve discussed before. Here’s how Moore’s Law holds up against DNA sequencing:
[Figure: NHGRI graph of DNA sequencing cost per megabase compared with Moore's law]
I think this figure is a little optimistic – I think you need more sequence than NHGRI claims, so multiply the cost three-fold. But that really doesn't change anything dramatically. And as I've noted before, this doesn't include all of the costs of sequencing (see here for what is and isn't included). And let's not even get started on read-write times to hard drives. We'll just pretend that magically happens.
By the way, physicists have it worse….
Anyway, the point is that we really do need more powerful computers regarding “their actual use to accomplish something.” Slowing down would be a really bad thing.
2. #2 daedalus2u
February 23, 2011
Physics is trivial compared to physiology.
There are what, a couple dozen fundamental particles? The wave functions are all linear combinations?
In physiology nothing is linear. The human genome is tens of thousands of genes, and who knows how much DNA that is still mostly a complete mystery. A DNA sequence only gives you an amino acid sequence; it doesn't give you the protein shape. You have to fold it to get the shape and function.
When are we going to be able to model something with tens of thousands of non-linear and coupled parameters per cell?
The answer is never, unless Moore’s Law extends for a few millennia.
3. #3 Jim Lund
February 23, 2011
The problem is AFAIK bigger in biology. Here’s the process for genomic sequencing: 1) break sequence into small pieces, 2) generate sequence, 3) reassemble sequence, 4) annotate sequence, 5) compare to all existing genomic sequences.
#5, that one eats up computation, so much so that people usually pare it down to 5) compare it to one or two other genomic sequences.
4. #4 Eric Lund
February 23, 2011
@daedalus: The complications in physics are different from the complications in physiology, but physics can definitely benefit from the continuation of Moore’s law. Unfortunately, physics simultaneously predicts a hard limit to Moore’s law. You cannot make a transistor smaller than a single atom or molecule, no matter what you do.
Some of the issues in physics:
1. Many-body systems. Even a simple system like a three-body gravitational interaction cannot be solved analytically, nor can the Schrödinger equation for anything more complicated than a hydrogen atom. I know people who routinely run simulations involving millions of particles, and some simulations push that up to billions.
2. Nonlinearity. Yes, there is a lot of physics where you can’t just construct every possible solution from a linear combination of other solutions. This crops up most often in areas like fluid mechanics; the Navier-Stokes equation is fundamentally nonlinear (and you can’t linearize it without discarding a boundary condition).
3. Strongly interacting systems. Anybody doing QCD calculations has to deal with this problem.
4. Large data sets. Many particle physics experiments generate such a quantity of data (hundreds of terabytes per day without filtering) that most of it has to be discarded without a human ever looking at it. So the team tries to figure out how to tell the computer what constitutes an “interesting” event, at the risk of discarding some serendipitous discovery in the process.
Not that physiology isn’t hard, but don’t assume physics is easy.
5. #5 Nomen Nescio
February 23, 2011
throwing more hardware at a difficult problem is a brute-force, primitive sort of solution. it’s necessary if the problem is genuinely, fundamentally difficult, such that no other solution but brute force exists — and this may well be the case in genomics, i don’t know enough about that problem space to tell.
but in the general case, problems tend not to be fundamentally hard, and it’s often possible to get a much greater performance boost through algorithmic improvements than through buying more hardware. so while specific problem domains may need all the Moore’s law improvements they can get — climate research, possibly genomics, probably physics — most problems could likely benefit more from better fundamental computer science efforts.
summary: i think Yglesias is correct in general, even if there will be notable specific exceptions to this rule.
6. #6 JohnV
February 23, 2011
Nomen, it's a lot easier (as biologists) to buy new hardware than it is to find some programmers who are able to cook up some new fancy-pants algorithm.
7. #7 daedalus2u
February 23, 2011
Eric, I wasn’t assuming physics was easy, just compared to physiology. I know that Moore’s Law can’t go on for millennia, and that even if it did, there are some problems that a brute force approach cannot solve.
I think it is the computer jocks who don’t know what to do with the increases in hardware capabilities except to fill it up with bloat-ware. Sort of like Bill Gates once saying that “640K ought to be enough for anybody”.
8. #8 Joseph
February 23, 2011
Just not a physicist this time. :)
(oh life; why do you have to emulate xkcd so closely?)
9. #9 Joseph
February 23, 2011
I’m going to go with the following answer: the same time we can use pseudopotentials to accurately model a Pentium chip.
10. #10 Joe Shelby
February 23, 2011
In an interview with Bob Cringely* for Triumph of the Nerds (PBS Documentary), Apple co-founder Steve Wozniak** said he personally was looking forward to the day that Moore's Law finally runs out, because that will be the point where software finally learns it has to grow up because it can't rely on CPU speed improvements anymore.
Mind you, he said that sometime between 1992 and 1998, when the Pentium, the PowerPC and the Alpha all were so fast already that it was evidence that the CPU was no longer the real bottleneck. Today, the bottleneck is actually getting data into the CPU and getting it out again – the bus, the disk drives, the memory, and today the network. I could (almost) watch a 1080p movie on a Pentium I 15 years ago, if it weren't for the slow bus, the slow video controller card, and the lack of a hard drive large enough to hold the dang thing.
Granted, that’s watch a movie. Not process it. The issue with large-data apps is that it is actually too hard for HUMANS to actually determine what all that data actually means. So when this guy says he wants to slow Moore’s law down, I wonder if he really is worried about speed or if he’s just worried that he and people like him will be left behind by how much MORE information needs to be defined from the ever-increasing amount of data available.
*not the person who still uses that name at Infoworld
** I’m very impressed that Wozniak is in the Firefox built-in spell checker dictionary…and Infoworld isn’t.
11. #11 Nemo
February 24, 2011
Mike, I don’t understand how the graph supports your point. It shows the cost of sequencing declining, tracking Moore’s Law until Oct. 2007, after which it falls even more rapidly, at least until near the end.
12. #12 Nomen Nescio
February 24, 2011
Nemo, that’s the cost of sequencing per megabase sequenced. in essence, that’s the cost of data generated dropping much faster than the cost of the hardware used to process the data, ergo, we’re generating much more data than we get CPU to analyze it with — dollar for dollar.
UNLESS someone can come up with some neato algorithmic improvement to speed up the genomics data analysis without needing to throw more CPU at it, brute-force style. but depending on just what that data analysis entails, such cleverness may not be possible. if the problem is NP-complete, for instance, the genomicists are stuck buying more CPU and that’s that — i honestly don’t know if that’s so or not.
13. #13 S. Pelech - Kinexus
February 24, 2011
Moore’s Law was originally based on the observed the reduction in the cost of computing power (such as processing speed or memory), but it actually applies to a wide range of other technologies. Interestingly, the time span for a 50% drop in computing power, about 1.8 to 2 years, is almost the same as the decline in the costs of DNA sequencing (per finished base pair) over the last 20 years, although there has been a further marked reduction in DNA sequencing costs in the last couple of years as noted in your figure. Also during the last two decades, the size of the Internet has been doubling every 5.3 years and the amount of data transmitted over the web on average has doubled each year.
It is likely that the steady and dramatic improvements in diverse technologies, and with this world knowledge, has arisen from the synergies provided by the intersections of these technologies. For example, advances in DNA sequencing would not have been feasible without improvements in computing power. It also appears that recent improvements in proteomics, for example with mass spectrometry or production of specific antibodies, would not have been possible without gene sequence data.
The real problem arises when some areas of science and technology become underfunded or relatively neglected relative to other, more outwardly sexier endeavors that suck up the lion’s share of support. I find it intriguing that in the biomedical field, in the US and Canada, there is an over-emphasis on developing genomic approaches and solutions. By contrast, in Europe, there appears to be a much stronger tradition for the study of proteins and small molecules.
In the ultimate search for solutions and understanding, under-explored areas of scientific enquiry could become the bottlenecks that severely compromise realization of the true value of the public investment in science and engineering. For example, the actual rate of improvements in the diagnosis and treatment of most diseases is still pathetically slow over the last few decades. It would be more prudent to take a more balanced approach in the funding of scientific research if the ultimate goal is real improvements in the health and welfare of humans and the other species on this planet. |
e855a3e8c8ff60a8 | The Many Worlds of Hugh Everett
After his now celebrated theory of multiple universes met scorn, Hugh Everett abandoned the world of academic physics. He turned to top secret military research and led a tragic private life
Hugh Everett
Editor's Note: This story was originally printed in the December 2007 issue of Scientific American and is being reposted from our archive in light of a new documentary on PBS, Parallel Worlds, Parallel Lives.
Hugh Everett III was a brilliant mathematician, an iconoclastic quantum theorist and, later, a successful defense contractor with access to the nation’s most sensitive military secrets. He introduced a new conception of reality to physics and influenced the course of world history at a time when nuclear Armageddon loomed large. To science-fiction aficionados, he remains a folk hero: the man who invented a quantum theory of multiple universes. To his children, he was someone else again: an emotionally unavailable father; “a lump of furniture sitting at the dining room table,” cigarette in hand. He was also a chain-smoking alcoholic who died prematurely.
At least that is how his history played out in our fork of the universe. If the many-worlds theory that Everett developed when he was a student at Princeton University in the mid-1950s is correct, his life took many other turns in an unfathomable number of branching universes.
Everett’s revolutionary analysis broke apart a theoretical logjam in interpreting the how of quantum mechanics. Although the many-worlds idea is by no means universally accepted even today, his methods in devising the theory presaged the concept of quantum decoherence— a modern explanation of why the probabilistic weirdness of quantum mechanics resolves itself into the concrete world of our experience.
Everett’s work is well known in physics and philosophical circles, but the tale of its discovery and of the rest of his life is known by relatively few. Archival research by Russian historian Eugene Shikhovtsev, myself and others and interviews I conducted with the late scientist’s colleagues and friends, as well as with his rock-musician son, unveil the story of a radiant intelligence extinguished all too soon by personal demons.
Ridiculous Things
Everett’s scientific journey began one night in 1954, he recounted two decades later, “after a slosh or two of sherry.” He and his Princeton classmate Charles Misner and a visitor named Aage Petersen (then an assistant to Niels Bohr) were thinking up “ridiculous things about the implications of quantum mechanics.” During this session Everett had the basic idea behind the many-worlds theory, and in the weeks that followed he began developing it into a dissertation.
The core of the idea was to interpret what the equations of quantum mechanics represent in the real world by having the mathematics of the theory itself show the way instead of by appending interpretational hypotheses to the math. In this way, the young man challenged the physics establishment of the day to reconsider its foundational notion of what constitutes physical reality.
In pursuing this endeavor, Everett boldly tackled the notorious measurement problem in quantum mechanics, which had bedeviled physicists since the 1920s. In a nutshell, the problem arises from a contradiction between how elementary particles (such as electrons and photons) interact at the microscopic, quantum level of reality and what happens when the particles are measured from the macroscopic, classical level. In the quantum world, an elementary particle, or a collection of such particles, can exist in a superposition of two or more possible states of being. An electron, for example, can be in a superposition of different locations, velocities and orientations of its spin. Yet anytime scientists measure one of these properties with precision, they see a definite result—just one of the elements of the superposition, not a combination of them. Nor do we ever see macroscopic objects in superpositions. The measurement problem boils down to this question: How and why does the unique world of our experience emerge from the multiplicities of alternatives available in the superposed quantum world?
Physicists use mathematical entities called wave functions to represent quantum states. A wave function can be thought of as a list of all the possible configurations of a superposed quantum system, along with numbers that give the probability of each configuration’s being the one, seemingly selected at random, that we will detect if we measure the system. The wave function treats each element of the superposition as equally real, if not necessarily equally probable from our point of view.
The Schrödinger equation delineates how a quantum system’s wave function will change through time, an evolution that it predicts will be smooth and deterministic (that is, with no randomness). But that elegant mathematics seems to contradict what happens when humans observe a quantum system, such as an electron, with a scientific instrument (which itself may be regarded as a quantum-mechanical system). For at the moment of measurement, the wave function describing the superposition of alternatives appears to collapse into one member of the superposition, thereby interrupting the smooth evolution of the wave function and introducing discontinuity. A single measurement outcome emerges, banishing all the other possibilities from classically described reality. Which alternative is produced at the moment of measurement appears to be arbitrary; its selection does not evolve logically from the information- packed wave function of the electron before measurement. Nor does the mathematics of collapse emerge from the seamless flow of the Schrödinger equation. In fact, collapse has to be added as a postulate, as an additional process that seems to violate the equation.
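For readers who want to see that contrast concretely, here is a toy sketch (assuming NumPy and SciPy; the two-state system, Hamiltonian, and numbers are ours, purely illustrative, not from Everett's work): the wave function evolves smoothly and unitarily under the Schrödinger equation, while "collapse" is a separate, discontinuous step bolted on at measurement.

```python
import numpy as np
from scipy.linalg import expm

# A wave function for a two-state system: a list of amplitudes whose
# squared magnitudes give the probabilities of outcomes A and B.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # superposition of A and B
print(np.abs(psi)**2)                      # [0.5, 0.5]

# Schrodinger evolution is smooth, deterministic, and unitary...
H = np.array([[0.0, 1.0], [1.0, 0.0]])     # a sample Hamiltonian, hbar = 1
psi_t = expm(-1j * H * 0.3) @ psi          # psi(t) = exp(-iHt) psi(0)
print(np.abs(psi_t)**2)                    # still a superposition

# ...whereas "collapse" is an extra, discontinuous postulate: one outcome
# is kept with Born-rule probability and the others are discarded.
outcome = np.random.choice(2, p=np.abs(psi_t)**2)
psi_after = np.eye(2)[outcome]             # definite state A or B
```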
Universal Wave Function
In stark contrast, Everett addressed the measurement problem by merging the microscopic and macroscopic worlds. He made the observer an integral part of the system observed, introducing a universal wave function that links observers and objects as parts of a single quantum system. He described the macroscopic world quantum mechanically and thought of large objects as existing in quantum superpositions as well. Breaking with Bohr and Heisenberg, he dispensed with the need for the discontinuity of a wave-function collapse.
Everett’s radical new idea was to ask, What if the continuous evolution of a wave function is not interrupted by acts of measurement? What if the Schrödinger equation always applies and applies to everything—objects and observers alike? What if no elements of superpositions are ever banished from reality? What would such a world appear like to us?
Everett saw that under those assumptions, the wave function of an observer would, in effect, bifurcate at each interaction of the observer with a superposed object. The universal wave function would contain branches for every alternative making up the object’s superposition. Each branch has its own copy of the observer, a copy that perceived one of those alternatives as the outcome. According to a fundamental mathematical property of the Schrödinger equation, once formed, the branches do not influence one another. Thus, each branch embarks on a different future, independently of the others.
Consider a person measuring a particle that is in a superposition of two states, such as an electron in a superposition of location A and location B. In one branch, the person perceives that the electron is at A. In a nearly identical branch, a copy of the person perceives that the same electron is at B. Each copy of the person perceives herself or himself as being one of a kind and sees chance as cooking up one reality from a menu of physical possibilities, even though, in the full reality, every alternative on the menu happens.
Explaining how we would perceive such a universe requires putting an observer into the picture. But the branching process happens regardless of whether a human being is present. In general, at each interaction between physical systems the total wave function of the combined systems would tend to bifurcate in this way. Today’s understanding of how the branches become independent and each turn out looking like the classical reality we are accustomed to is known as decoherence theory. It is an accepted part of standard modern quantum theory, although not everyone agrees with the Everettian interpretation that all the branches represent realities that exist.
Everett was not the first physicist to criticize the Copenhagen collapse postulate as inadequate. But he broke new ground by deriving a mathematically consistent theory of a universal wave function from the equations of quantum mechanics itself. The existence of multiple universes emerged as a consequence of his theory, not a predicate. In a footnote in his thesis, Everett wrote: “From the viewpoint of the theory, all elements of a superposition (all ‘branches’) are ‘actual,’ none any more ‘real’ than the rest.”
The draft containing all these ideas provoked a remarkable behind-the-scenes struggle, uncovered about five years ago in archival research by Olival Freire, Jr., a historian of science at the Federal University of Bahia in Brazil. In the spring of 1956 Everett’s academic adviser at Princeton, John Archibald Wheeler, took the draft dissertation to Copenhagen to convince the Royal Danish Academy of Sciences and Letters to publish it. He wrote to Everett that he had “three long and strong discussions about it” with Bohr and Petersen. Wheeler also shared his student’s work with several other physicists at Bohr’s Institute for Theoretical Physics, including Alexander W. Stern.
Wheeler’s letter to Everett reported: “Your beautiful wave function formalism of course remains unshaken; but all of us feel that the real issue is the words that are to be attached to the quantities of the formalism.” For one thing, Wheeler was troubled by Everett’s use of “splitting” humans and cannonballs as scientific metaphors. His letter revealed the Copenhagenists’ discomfort over the meaning of Everett’s work. Stern dismissed Everett’s theory as “theology,” and Wheeler himself was reluctant to challenge Bohr. In a long, politic letter to Stern, he explicated and excused Everett’s theory as an extension, not a refutation, of the prevailing interpretation of quantum mechanics:
I think I may say that this very fine and able and independently thinking young man has gradually come to accept the present approach to the measurement problem as correct and self-consistent, despite a few traces that remain in the present thesis draft of a past dubious attitude. So, to avoid any possible misunderstanding, let me say that Everett’s thesis is not meant to question the present approach to the measurement problem, but to accept it and generalize it. [Emphasis in original.]
Everett would have completely disagreed with Wheeler’s description of his opinion of the Copenhagen interpretation. For example, a year later, when responding to criticisms from Bryce S. DeWitt, editor of the journal Reviews of Modern Physics, he wrote:
The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics ... as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm.
While Wheeler was off in Europe arguing his case, Everett was in danger of losing his student draft deferment. To avoid going to boot camp, he decided to take a research job at the Pentagon. He moved to the Washington, D.C., area and never came back to theoretical physics.
During the next year, however, he communicated long-distance with Wheeler as he reluctantly whittled down his thesis to a quarter of its original length. In April 1957 Everett’s thesis committee accepted the abridged version—without the “splits.” Three months later Reviews of Modern Physics published the shortened version, entitled “‘Relative State’ Formulation of Quantum Mechanics.” In the same issue, a companion paper by Wheeler lauded his student’s discovery.
When the paper appeared in print, it slipped into instant obscurity. Wheeler gradually distanced himself from association with Everett’s theory, but he kept in touch with the theorist, encouraging him, in vain, to do more work in quantum mechanics. In an interview last year, Wheeler, then 95, commented that “[Everett] was disappointed, perhaps bitter, at the nonreaction to his theory. How I wish that I had kept up the sessions with Everett. The questions that he brought up were important.”
Nuclear Military Strategies
Princeton awarded Everett his doctorate nearly a year after he had begun his first project for the Pentagon: calculating potential mortality rates from radioactive fallout in a nuclear war. He soon headed the mathematics division in the Pentagon’s nearly invisible but extremely influential Weapons Systems Evaluation Group (WSEG). Everett advised high-level officials in the Eisenhower and Kennedy administrations on the best methods for selecting hydrogen bomb targets and structuring the nuclear triad of bombers, submarines and missiles for optimal punch in a nuclear strike.
In 1960 he helped write WSEG No. 50, a catalytic report that remains classified to this day. According to Everett’s friend and WSEG colleague George E. Pugh, as well as historians, WSEG No. 50 rationalized and promoted military strategies that were operative for decades, including the concept of Mutually Assured Destruction. WSEG provided nuclear warfare policymakers with enough scary information about the global effects of radioactive fallout that many became convinced of the merit of waging a perpetual standoff—as opposed to, as some powerful people were advocating, launching preemptive first strikes on the Soviet Union, China and other communist countries.
One final chapter in the struggle over Everett’s theory also played out in this period. In the spring of 1959 Bohr granted Everett an interview in Copenhagen. They met several times during a six-week period but to little effect: Bohr did not shift his position, and Everett did not reenter quantum physics research. The excursion was not a complete failure, though. One afternoon, while drinking beer at the Hotel Østerport, Everett wrote out on hotel stationery an important refinement of the other mathematical tour de force for which he is renowned, the generalized Lagrange multiplier method, also known as the Everett algorithm. The method simplifies searches for optimum solutions to complex logistical problems—ranging from the deployment of nuclear weapons to just-in-time industrial production schedules to the routing of buses for maximizing the desegregation of school districts.
In 1964 Everett, Pugh and several other WSEG colleagues founded a private defense company, Lambda Corporation. Among other activities, it designed mathematical models of anti-ballistic missile systems and computerized nuclear war games that, according to Pugh, were used by the military for years. Everett became enamored of inventing applications for Bayes’ theorem, a mathematical method of correlating the probabilities of future events with past experience. In 1971 Everett built a prototype Bayesian machine, a computer program that learns from experience and simplifies decision making by deducing probable outcomes, much like the human faculty of common sense. Under contract to the Pentagon, Lambda used the Bayesian method to invent techniques for tracking trajectories of incoming ballistic missiles.
In 1973 Everett left Lambda and started a data-processing company, DBS, with Lambda colleague Donald Reisler. DBS researched weapons applications but specialized in analyzing the socioeconomic effects of government affirmative action programs. When they first met, Reisler recalls, Everett “sheepishly” asked whether he had ever read his 1957 paper. “I thought for an instant and replied, ‘Oh, my God, you are that Everett, the crazy one who wrote that insane paper,’” Reisler says. “I had read it in graduate school and chuckled, rejected it out of hand.” The two became close friends but agreed not to talk about multiple universes again.
Three-Martini Lunches
Despite all these successes, Everett’s life was blighted in many ways. He had a reputation for drinking, and friends say the problem seemed only to grow with time. According to Reisler, his partner usually enjoyed a three-martini lunch, sleeping it off in his office—although he still managed to be productive.
Yet his hedonism did not reflect a relaxed, playful attitude toward life. “He was not a sympathetic person,” Reisler says. “He brought a cold, brutal logic to the study of things. Civil-rights entitlements made no sense to him.”
John Y. Barry, a former colleague of Everett’s at WSEG, also questioned his ethics. In the mid-1970s Barry convinced his employers at J. P. Morgan to hire Everett to develop a Bayesian method of predicting movement in the stock market. By several accounts, Everett succeeded— and then refused to turn the product over to J. P. Morgan. “He used us,” Barry recalls. “[He was] a brilliant, innovative, slippery, untrustworthy, probably alcoholic individual.”
Everett was egocentric. “Hugh liked to espouse a form of extreme solipsism,” says Elaine Tsiang, a former employee at DBS. “Although he took pains to distance his [many-worlds] theory from any theory of mind or consciousness, obviously we all owed our existence relative to the world he had brought into being.”
And he barely knew his children, Elizabeth and Mark.
As Everett pursued his entrepreneurial career, the world of physics was starting to take a hard look at his once ignored theory. DeWitt swung around 180 degrees and became its most devoted champion. In 1967 he wrote an article presenting the Wheeler-DeWitt equation: a universal wave function that a theory of quantum gravity should satisfy. He credited Everett for having demonstrated the need for such an approach. DeWitt and his graduate student Neill Graham then edited a book of physics papers, The Many-Worlds Interpretation of Quantum Mechanics, which featured the unamputated version of Everett’s dissertation. The epigram “many worlds” stuck fast, popularized in the science-fiction magazine Analog in 1976.
Not everybody agrees, however, that the Copenhagen interpretation needs to give way. Cornell University physicist N. David Mermin maintains that the Everett interpretation treats the wave function as part of the objectively real world, whereas he sees it as merely a mathematical tool. “A wave function is a human construction,” Mermin says. “Its purpose is to enable us to make sense of our macroscopic observations. My point of view is exactly the opposite of the many-worlds interpretation. Quantum mechanics is a device for enabling us to make our observations coherent, and to say that we are inside of quantum mechanics and that quantum mechanics must apply to our perceptions is inconsistent.”
But many working physicists say that Everett’s theory should be taken seriously.
“When I heard about Everett’s interpretation in the late 1970s,” says Stephen Shenker, a theoretical physicist at Stanford University, “I thought it was kind of crazy. Now most of the people I know that think about string theory and quantum cosmology think about something along an Everett-style interpretation. And because of recent developments in quantum computation, these questions are no longer academic.”
One of the pioneers of decoherence, Wojciech H. Zurek, a fellow at Los Alamos National Laboratory, comments that “Everett’s accomplishment was to insist that quantum theory should be universal, that there should not be a division of the universe into something which is a priori classical and something which is a priori quantum. He gave us all a ticket to use quantum theory the way we use it now to describe measurement as a whole.”
String theorist Juan Maldacena of the Institute for Advanced Study in Princeton, N.J., reflects a common attitude among his colleagues: “When I think about the Everett theory quantum mechanically, it is the most reasonable thing to believe. In everyday life, I do not believe it.”
In 1977 DeWitt and Wheeler invited Everett, who hated public speaking, to make a presentation on his interpretation at the University of Texas at Austin. He wore a rumpled black suit and chain-smoked throughout the seminar. David Deutsch, now at the University of Oxford and a founder of the field of quantum computation (itself inspired by Everett’s theory), was there. “Everett was before his time,” Deutsch says in summing up Everett’s contribution. “He represents the refusal to relinquish objective explanation. A great deal of harm was done to progress in both physics and philosophy by the abdication of the original purpose of those fields: to explain the world. We got irretrievably bogged down in formalisms, and things were regarded as progress which are not explanatory, and the vacuum was filled by mysticism and religion and every kind of rubbish. Everett is important because he stood out against it.”
After the Texas visit, Wheeler tried to hook Everett up with the Institute for Theoretical Physics in Santa Barbara, Calif. Everett reportedly was interested, but nothing came of the plan.
Totality of Experience
Everett died in bed on July 19, 1982. He was just 51. His son, Mark, then a teenager, remembers finding his father’s lifeless body that morning. Feeling the cold body, Mark realized he had no memory of ever touching his dad before. “I did not know how to feel about the fact that my father just died,” he told me. “I didn’t really have any relationship with him.”
Not long afterward, Mark moved to Los Angeles. He became a successful songwriter and the lead singer for a popular rock band, Eels. Many of his songs express the sadness he experienced as the son of a depressed, alcoholic, emotionally detached man. It was not until years after his father’s death that Mark learned of Everett’s career and accomplishments.
Mark’s sister, Elizabeth, made the first of many suicide attempts in June 1982, only a month before Everett died. Mark discovered her unconscious on the bathroom floor and got her to the hospital just in time. When he returned home later that night, he recalled, his father “looked up from his newspaper and said, ‘I didn’t know she was that sad.’” In 1996 Elizabeth killed herself with an overdose of sleeping pills, leaving a note in her purse saying she was going to join her father in another universe.
In a 2005 song, “Things the Grandchildren Should Know,” Mark wrote: “I never really understood/ what it must have been like for him/living inside his head.” His solipsistically inclined father would have understood that dilemma. “Once we have granted that any physical theory is essentially only a model for the world of experience,” Everett concluded in the unedited version of his dissertation, “we must renounce all hope of finding anything like the correct theory ... simply because the totality of experience is never accessible to us.”
quantum mechanics
quantum mechanics: see quantum theory.
Branch of mathematical physics that deals with atomic and subatomic systems. It is concerned with phenomena that are so small-scale that they cannot be described in classical terms, and it is formulated entirely in terms of statistical probabilities. Considered one of the great ideas of the 20th century, quantum mechanics was developed mainly by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, and Max Born and led to a drastic reappraisal of the concept of objective reality. It explained the structure of atoms, atomic nuclei (see nucleus), and molecules; the behaviour of subatomic particles; the nature of chemical bonds (see bonding); the properties of crystalline solids (see crystal); nuclear energy; and the forces that stabilize collapsed stars. It also led directly to the development of the laser, the electron microscope, and the transistor.
In quantum mechanics, the Hamiltonian H is the observable corresponding to the total energy of the system. As with all observables, the spectrum of the Hamiltonian is the set of possible outcomes when one measures the total energy of a system. Like any other self-adjoint operator, the spectrum of the Hamiltonian can be decomposed, via its spectral measures, into pure point, absolutely continuous, and singular parts. The pure point spectrum can be associated to eigenvectors, which in turn are the bound states of the system. The absolutely continuous spectrum corresponds to the free states. The singular spectrum, interestingly enough, comprises physically impossible outcomes. For example, consider the finite potential well, which admits bound states with discrete negative energies and free states with continuous positive energies.
Schrödinger equation
The Hamiltonian generates the time evolution of quantum states. If $|\psi(t)\rangle$ is the state of the system at time $t$, then
$$H\,|\psi(t)\rangle = \mathrm{i}\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle,$$
where $\hbar$ is the reduced Planck constant. This equation is known as the Schrödinger equation. (It takes the same form as the Hamilton-Jacobi equation, which is one of the reasons $H$ is also called the Hamiltonian.) Given the state at some initial time ($t = 0$), we can integrate it to obtain the state at any subsequent time. In particular, if $H$ is independent of time, then
$$|\psi(t)\rangle = \exp\!\left(-\frac{\mathrm{i}Ht}{\hbar}\right)|\psi(0)\rangle.$$
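As a concrete illustration, here is a minimal numerical sketch of this time evolution for a two-level system (the Hamiltonian below is made up for the example, and $\hbar$ is set to 1):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # natural units
H = np.array([[1.0, 0.5],                    # hypothetical Hermitian
              [0.5, -1.0]])                  # two-level Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |psi(0)>

def evolve(psi, t):
    """Apply |psi(t)> = exp(-i H t / hbar) |psi(0)>."""
    return expm(-1j * H * t / hbar) @ psi

psi_t = evolve(psi0, t=2.0)
print(np.vdot(psi_t, psi_t).real)            # norm is preserved: ~1.0
```

Because $H$ is Hermitian, the norm of the state is conserved, which the final line checks.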
Note: In introductory physics literature, the following is often taken as an assumption:
The eigenkets (eigenvectors) of $H$, denoted $|a\rangle$ (using Dirac bra-ket notation), provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted $\{E_a\}$, solving the equation:
$$H\,|a\rangle = E_a\,|a\rangle.$$
Since H is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumption. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
Similarly, the exponential operator on the right hand side of the Schrödinger equation is sometimes defined by the power series. One might notice that taking polynomials of unbounded and not everywhere defined operators may not make mathematical sense, much less power series. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicist's formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
$$U = \exp\!\left(-\frac{\mathrm{i}Ht}{\hbar}\right)$$
is a unitary operator. It is the time evolution operator, or propagator, of a closed quantum system. If the Hamiltonian is time-independent, $\{U(t)\}$ forms a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
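A quick numerical check of these properties, reusing the hypothetical two-level Hamiltonian from the sketch above ($\hbar = 1$):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                 # same hypothetical Hamiltonian

def U(t):
    """Propagator U(t) = exp(-i H t / hbar)."""
    return expm(-1j * H * t / hbar)

print(np.allclose(U(1.0).conj().T @ U(1.0), np.eye(2)))  # True: U†U = I
print(np.allclose(U(1.0) @ U(2.0), U(3.0)))              # True: U(t1)U(t2) = U(t1+t2)
```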
Energy eigenket degeneracy, symmetry, and conservation laws
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the x direction is a different state from one propagating in the y direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator $U$ commutes with the Hamiltonian. To see this, suppose that $|a\rangle$ is an energy eigenket. Then $U|a\rangle$ is an energy eigenket with the same eigenvalue, since
$$H\,(U|a\rangle) = UH\,|a\rangle = U E_a\,|a\rangle = E_a\,(U|a\rangle).$$
Since $U$ is nontrivial, at least one pair of $|a\rangle$ and $U|a\rangle$ must represent distinct states. Therefore, $H$ has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
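A toy numerical illustration of a symmetry forcing degeneracy (a made-up three-site ring Hamiltonian, not anything from the text above):

```python
import numpy as np

# Tight-binding Hamiltonian on a 3-site ring: hopping -1 between
# neighbouring sites, periodic boundary conditions.
H = -np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])

# Cyclic shift of the sites: a nontrivial unitary symmetry of the ring.
U = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

print(np.allclose(U @ H, H @ U))   # True: [U, H] = 0
print(np.linalg.eigvalsh(H))       # [-2., 1., 1.] -- a degenerate pair
```

The cyclic symmetry guarantees the two-fold degenerate level, mirroring the argument above.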
The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:
$$U = I - \mathrm{i}\,\epsilon G + O(\epsilon^2)$$
It is straightforward to show that if U commutes with H, then so does G:
$$[H, G] = 0,$$
and therefore
$$\frac{\partial}{\partial t}\,\langle\psi(t)|G|\psi(t)\rangle = \frac{1}{\mathrm{i}\hbar}\,\langle\psi(t)|[G,H]|\psi(t)\rangle = 0.$$
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
$$\langle\psi(t)|H = -\mathrm{i}\hbar\,\frac{\partial}{\partial t}\,\langle\psi(t)|.$$
Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
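A short numerical check of this conservation law, using a hypothetical three-level Hamiltonian that is symmetric under exchanging levels 0 and 2 (the exchange operator plays the role of a Hermitian $G$ commuting with $H$):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3, 0.0],     # hypothetical Hamiltonian, symmetric
              [0.3, 2.0, 0.3],     # under swapping levels 0 and 2
              [0.0, 0.3, 1.0]])
G = np.array([[0.0, 0.0, 1.0],     # exchange operator: Hermitian,
              [0.0, 1.0, 0.0],     # commutes with H
              [1.0, 0.0, 0.0]])

psi0 = np.array([1.0, 1.0, 0.0], dtype=complex)
psi0 /= np.linalg.norm(psi0)

for t in (0.0, 0.7, 1.4):
    psi_t = expm(-1j * H * t / hbar) @ psi0
    print(round(np.vdot(psi_t, G @ psi_t).real, 10))  # same value each time
```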
Hamilton's equations
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states $\{|n\rangle\}$, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
$$\langle n' | n \rangle = \delta_{nn'}.$$
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time $t$, $|\psi(t)\rangle$, can be expanded in terms of these basis states:
$$|\psi(t)\rangle = \sum_{n} a_n(t)\,|n\rangle, \qquad a_n(t) = \langle n | \psi(t) \rangle.$$
The coefficients $a_n(t)$ are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
$$\langle H(t) \rangle \;\stackrel{\mathrm{def}}{=}\; \langle\psi(t)|H|\psi(t)\rangle = \sum_{nn'} a_{n'}^{*}\, a_n\, \langle n'|H|n \rangle,$$
where the last step was obtained by expanding $|\psi(t)\rangle$ in terms of the basis states.
Each of the $a_n(t)$'s actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use $a_n(t)$ and its complex conjugate $a_n^{*}(t)$. With this choice of independent variables, we can calculate the partial derivative
$$\frac{\partial \langle H \rangle}{\partial a_{n'}^{*}} = \sum_{n} a_n\, \langle n'|H|n \rangle = \langle n'|H|\psi\rangle.$$
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
$$\frac{\partial \langle H \rangle}{\partial a_{n'}^{*}} = \mathrm{i}\hbar\, \frac{\partial a_{n'}}{\partial t}.$$
Similarly, one can show that
$$\frac{\partial \langle H \rangle}{\partial a_n} = -\mathrm{i}\hbar\, \frac{\partial a_{n}^{*}}{\partial t}.$$
If we define "conjugate momentum" variables $\pi_n$ by
$$\pi_{n}(t) = \mathrm{i}\hbar\, a_n^{*}(t),$$
then the above equations become
$$\frac{\partial \langle H \rangle}{\partial \pi_{n}} = \frac{\partial a_{n}}{\partial t}, \qquad \frac{\partial \langle H \rangle}{\partial a_n} = -\frac{\partial \pi_{n}}{\partial t},$$
which is precisely the form of Hamilton's equations, with the $a_n$'s as the generalized coordinates, the $\pi_n$'s as the conjugate momenta, and $\langle H \rangle$ taking the place of the classical Hamiltonian.
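The correspondence can be checked numerically. A minimal sketch (random Hermitian $H$, made up for the example, $\hbar = 1$) verifying that $\partial\langle H\rangle/\partial\pi_n$ equals $\partial a_n/\partial t$ as given by the Schrödinger equation:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)

# Hypothetical Hermitian Hamiltonian in a discrete orthonormal basis.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

a = rng.normal(size=4) + 1j * rng.normal(size=4)   # coefficients a_n(t)
a /= np.linalg.norm(a)

dadt = -1j / hbar * (H @ a)       # Schroedinger: i*hbar da/dt = H a
dH_dastar = H @ a                 # d<H>/da_n* = (H a)_n
dH_dpi = dH_dastar / (1j * hbar)  # pi_n = i*hbar*a_n*, chain rule

print(np.allclose(dadt, dH_dpi))  # True: Hamilton's first equation holds
```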
Atomic number: Quiz
Question 1: The atomic number, Z, should not be confused with the mass number, A, which is the total number of protons and ________ in the nucleus of an atom.
Atomic nucleus, Neutron, Electron, Neutrino
Question 2: Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes), the ________ of an atom is roughly equal to A.
Atomic mass, Molar mass, Chemistry, Oxygen
Question 3: Among other things, Moseley demonstrated that the lanthanide series (from ________ to lutetium inclusive) must have 15 members — no fewer and no more — which was far from obvious from the chemistry at that time.
Question 4: This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated ________ of the nucleus, i.e.
Magnetic field, Electric charge, Electric current, Electromagnetism
Question 5: Atoms having the same atomic number Z but different neutron number N, and hence different atomic mass, are known as ________.
Stable nuclide, Actinoid, Technetium, Isotope
Question 6: In an atom of neutral charge, the atomic number is also equal to the number of ________.
Question 7: Most naturally occurring elements exist as a mixture of isotopes, and the average atomic mass of this mixture determines the element's ________.
Boron, Atomic weight, Fluorine, Avogadro constant
Question 8: In general, the ________ becomes shorter as atomic number increases, though an "island of stability" may exist for undiscovered isotopes with certain numbers of protons and neutrons.
Half-life, Cosmic ray, Radioactive decay, Nuclear fission
Question 9: The configuration of these electrons follows from the principles of ________.
Quantum mechanics, Wave–particle duality, Schrödinger equation, Introduction to quantum mechanics
Question 10: In chemistry and physics, the atomic number (also known as the proton number) is the number of protons found in the nucleus of an ________ and therefore identical to the charge number of the nucleus.
Some of the most perplexing topics in physics revolve around quantum theory. The quandary is seen most famously in the Schrödinger’s cat question and the issue of information loss in black hole evaporation. Richard Feynman said, “I think that I can safely say that nobody understands quantum mechanics.” Most physicists have just gotten used to it. There’s no doubt quantum theory is successful at the practical level. But when considering it as more than a tool for calculating probabilities for possible outcomes of experiments in the laboratory, and taking it as a fundamental description of the “world out there,” it faces serious conceptual problems.
The basic problem is that quantum theory seems to be about what we measure and not about what is out there in the world. One might think this is just fine, as the theory represents just “our information” about the world. But that would make sense only if there were something about the world that we can be informed of, which must be, in general situations, specified by the theory. Understanding how to deal with this conceptual problem requires us to look at the theory in more detail.
BLACK HOLE PARADOX: If a black hole evaporates completely, leaving only the thermal radiation, there seems to be no way in which it might encode all the information needed to recreate the exact quantum state of the matter that gave rise to the black hole in the first place. Credit: NASA
According to quantum theory, a generic state of a system (a particle’s position or velocity) does not have well-defined values. That indefiniteness is known as “quantum uncertainty,” and, unfortunately, also as “quantum fluctuation.” The quantum theory presented in standard textbooks involves two distinct rules for the evolution of the state of a physical system. One, referred to by Roger Penrose as the U-process, is represented by the Schrödinger equation, allowing the precise determination of the state of the system at any future time (deterministic prediction), or time in the past (complete retrodiction), given the state of the system at present. But this rule only holds as long as the system is not subjected to an “observation.”
The second rule, which comes into play when some attribute of the system is observed or measured, is a stochastic rule, referred to by Penrose as the R-process. According to this rule, as a result of the measurement, the state jumps into one of the states where the attribute in question has a well-defined value. This rule does not allow, in general, a precise prediction of which state that would be, nor the retrodiction of the state previous to the measurement or observation. One can use it to accurately predict probabilities, and predict the average value that would emerge from a large number of repetitions of the experiment, as well as the statistical dispersion of the results, a quantity that coincides, in numerical value, with the level of indefiniteness mentioned above.
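For reference (standard textbook statements, not spelled out in the essay): measuring an observable $A$ with eigenstates $|a\rangle$ on a system in state $|\psi\rangle$ yields outcome $a$ with probability given by the Born rule,
$$p(a) = |\langle a|\psi\rangle|^2, \qquad \langle A\rangle = \sum_a a\,p(a), \qquad (\Delta A)^2 = \langle A^2\rangle - \langle A\rangle^2,$$
where $\Delta A$ is the statistical dispersion the author mentions.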
One of the problems is that quantum theory is obscure (to say the least) regarding what it claims about the nature of the world when no one is looking. Is the involvement of a consciousness required for the theory to make sense, and if so, does that include a mouse’s or a fly’s? In particular, the specification of what constitutes a measurement is irreparably vague. Perhaps all that’s needed is a large enough apparatus. But what’s large enough? And what happens at the boundary? These issues are referred to as the measurement problem. Such conceptual difficulties are usually ignored by practicing physicists.
One exception is provided by David Bohm, who rediscovered a proposal (originally considered by Louis de Broglie) giving a different characterization of the theory, with point-like particles that at all times have definite positions and velocities, while the quantum state simply guides them in their time evolution (and a cat is never simultaneously dead and alive). Another notable exception is exemplified by the proponents of modifications of the theory that would unify the U and R processes into a single law, removing the need to introduce the notion of “measurement” at the fundamental level. In that case, Schrödinger’s unfortunate pet would be either dead or alive, even if no one is looking.
This approach has formed the basis of “spontaneous collapse” theories,1 which are characterized by invoking something akin to a collection of miniature versions of the R process occurring spontaneously to all particles throughout space and time; that is, without the need for a measurement to take place. Further out on the frontier is the many-worlds theory (pioneered by Hugh Everett), in which every measurement is tied to a bifurcation (or multifurcation) of reality into something like parallel coexisting worlds.
A careful analysis shows that these are essentially the three logical avenues that might be taken to deal with the issue2: Modify the theory by adding something beyond the quantum state (the hidden-variables route exemplified by the de Broglie-Bohm approach), modify the rules for evolution in the theory by having measurement-like events occur all the time (as in the spontaneous collapse theories), or remove the R process altogether (which takes us down the many-worlds path).
Many quantum physicists are convinced that the issue at large, or the approach one might take in its regard, is of no relevance to the challenges in their fields. I, among a small group of colleagues, hold a dramatically different view, and maintain that spontaneous collapse is the most promising route to address some of the most serious difficulties faced by our current understanding of the laws of the universe, and in particular those situations where gravitation and quantum theory must be used together.
A central feature of cosmology, as we commonly understand it, is an epoch known as inflation, thought to have taken place in the first fractions of a second after the Planck epoch, itself a mysterious regime. In the Planck epoch, quantum gravity should rule, and the notions of spacetime itself would probably cease to be relevant or useful. (Quantum gravity refers to a theory that would harmoniously combine the basic principles of general relativity, our best theory of gravitation, and quantum theory.) In this inflationary regime, the usual concepts of spacetime are supposed to be adequate. But in addition, gravitation is thought to be well described by general relativity, and matter explained by the same type of theories we use in ordinary particle physics situations (such as those explored empirically in places like CERN or in the studies of high energy cosmic rays).
The main difference is that the type of matter thought to be dominant in the inflationary epoch is a field known as the inflaton. This is a little bit like the electromagnetic field, but far simpler, because it lacks intrinsic directionality or spin. The main feature of that epoch is that the universe expands, as a result of the gravitational effects of the inflaton field, in an extremely fast and accelerated manner (by a total expansion factor of at least a million trillion trillion times; i.e., a factor of $10^{30}$). As a result, the spatial curvature of the universe is driven to zero, and all deviations from perfect homogeneity and isotropy are completely diluted (the remaining deviations are of the order of $10^{-90}$, so small that, for simplicity, I will take them as zero).
The inflationary epoch ends when the inflaton field decays, filling the universe with all the matter we find in it today: the usual matter from which you, the chair you are sitting on, and the solar system are made of; the slightly more exotic type of matter we are able to produce for fractions of a second with powerful particle accelerators like CERN’s; and even the elusive dark matter that seems to constitute the overwhelming part of galaxies and galactic clusters. In other words, the end of the inflationary epoch is supposed to lead into the regime described by the older, traditional, and empirically successful Big Bang cosmology, describing an expanding universe filled with extremely hot plasma composed of all the variety of particles with their respective abundances basically ruled by thermodynamic considerations. A universe that cooled down as it expanded, leading to the formation of light nuclei (when the temperature dropped to about a billion degrees Kelvin), and much later, to the formation of the first atoms (at about 3,000 degrees Kelvin). This latter stage is the one in which the photons corresponding to cosmic microwave background radiation are emitted.
In the small variations of temperature patterns of the cosmic microwave background radiation, we can see the imprint of the primordial deviations from homogeneity and isotropy that would continue to grow to the present to make up the galaxies, stars, and planets that populate our current universe. The point is that the universe is not, and has not been, homogeneous and isotropic for quite some time. On the other hand, according to inflation, the universe’s violent expansion completely diluted all inhomogeneities (differences in conditions between different places) and anisotropies (differences among different directions). That situation is described in terms of a spacetime and inflaton field in states that are completely homogeneous and isotropic.
Where do the inhomogeneities that led to the formation of all cosmic structure, and whose imprint we see in the cosmic microwave background, come from? According to the current cosmological orthodoxy, they arose out of “quantum fluctuations” of the inflaton and the spacetime metric during the inflationary epoch. In fact, inflation comes together with a recipe for the quantum state of fields in the inflationary epoch, a so-called Bunch-Davies vacuum. That state, just as the vacuum state in flat spacetime, has the property of being 100 percent homogeneous and isotropic, and yet somehow we are meant to regard its quantum uncertainties as holding the seeds of present-day cosmic inhomogeneities.
Most cosmologists see no problem here because they readily interchange “quantum indefiniteness” and “statistical dispersions” (a conceptual error often obscured by the fact that the word fluctuation is used in both contexts). But that view is only justified if there were a measurement involved. The point is that a measurement might indeed change the state of the system, according to the R-process, leading to a state that is no longer homogeneous and isotropic as the initial one.
But what could count as a measurement in the early universe, well before galaxies, planets, and conscious beings ever formed? Some cosmologists would answer that, today, with our satellites, we are making the required measurements. A moment’s reflection shows how problematic such a posture is: We, with our measuring devices, are responsible for the breakdown of the perfect homogeneity prevailing in the early universe, a change that led to the formation of cosmic structure, including galaxies, stars, and planets, which in turn was necessary for the emergence of the conditions where life and (self-proclaimed “intelligent”) creatures like ourselves would be possible! We would be, in part, the cause of our own existence! I can’t help but be reminded of the old country song, “I’m My Own Grandpa.”
After considering existing paths to address the grandpa problem, Alejandro Perez, Hanno Sahlmann, and I proposed3 adding a new ingredient to the mix: the spontaneous collapse of the quantum state of the inflaton field. This is a version of the R-process, taking place constantly, which in general induces small and random changes in the quantum state of the field. The randomness of such a process would be able to account for the breakdown of homogeneity and isotropy in the early universe, without having to invoke any observer or measuring device. Moreover, if spontaneous collapse satisfied some simple requirements, the resulting predictions regarding these inhomogeneities could reproduce the characteristics of the distribution of the temperature variations that are seen in the cosmic microwave background.4
At first, the new approach did not seem to lead to any important departures from the standard predictions. But there’s at least one aspect of the story in which the predictions differ dramatically. It turns out that, according to the standard treatment, the predictions for the generation of inhomogeneities in the density of matter in the universe come inseparably attached to similar predictions for the generation of the so-called primordial gravity waves. These would be similar to those gravity waves that have been so spectacularly detected arising from the collisions of black holes and/or neutron stars by the detectors LIGO and VIRGO. But unlike those, the primordial ones would be so feeble now, that their presence is only expected to be detectable in a certain type of anisotropies in the polarization of the cosmic microwave background radiation.
The search for those has been intense, as they are taken as the main possible confirmation of the correctness of inflation. The fact that they have not, so far, been detected is considered as a serious problem in inflationary cosmology, with the simplest and most attractive models already ruled out by the failure of their expected detection. When following our approach,5 the predictions regarding the generation of primordial gravity waves are so dramatically reduced that they would be undetectable by current methods and detector sensitivities. The calculations show they might be detectable only with substantially improved sensitivities and with a change of focus from the very small to the very large angular scales in the sky (two unfortunately rather difficult things to do). Thus, quite unexpectedly, and as a result of the type of conceptual considerations we set out to face, a concrete prediction for inflationary cosmology was dramatically changed, with the novel one in better accordance with the existing empirical evidence.
The conceptual difficulties of quantum theory are also tied to the topic of black holes. The theory of general relativity predicts that once black holes are formed, a singularity—a region where geometrical quantities would nominally acquire the value infinity—will develop in their interior, with the curvature diverging as that region is approached. The nature of such singularity has given rise to wild speculations, including the notion that they represent the emergence of even more exotic objects, or even portals to other universes. But what they really indicate is the presence of a regime where the theory of general relativity fails to apply. (Not too exciting, sorry folks!)
That is, if we want to rely on the theory of general relativity, we must do so only up to some boundary that excludes the region where those singularities are supposed to appear.
Physicists are generally convinced that our current theory ought to be superseded by a deeper one, which encapsulates both general relativity and quantum mechanics, joined in a smooth and self-consistent manner: a theory of quantum gravity. Such quantum gravity is expected to “cure” those singularities, and remove the need to include a boundary in the discussions involving black holes. The least speculative notions do not involve anything like portals to other universes or wildly exotic objects appearing in their stead.
One feature about them, first noted by physicist Jacob Bekenstein, that is taken as a fundamental clue, is that their energy exchanges with the exterior are ruled by laws that seem identical to those of thermodynamics. In particular, and as shown by Stephen Hawking, they lose energy via the emission of thermal radiation, and have an entropy given by the Boltzmann constant (ubiquitous in all thermodynamics) for each “tile” of Planck’s length side needed to cover the area of the black hole. This has attracted intense interest in the last decades, as physicists started considering a variety of approaches toward the construction of a theory of quantum gravity. Surely, such a theory should be able to account for the expression for the black hole entropy. And readily enough, in a relatively short time, and within slightly different, but always rather limited contexts, proponents of quantum gravity found accounts that came up with the right answer.
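In formulas (the standard Bekenstein-Hawking expression, supplied here for reference), a black hole with horizon area $A$ carries the entropy
$$S_{\mathrm{BH}} = \frac{k_B\,A}{4\,\ell_P^2}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}},$$
that is, a quarter of a Boltzmann constant for each Planck-area tile covering the horizon.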
But the fact that this analysis, starting from Hawking’s discovery, involves quantum theory, has raised another question that continues to baffle physicists. It’s the focus of intense debates and disagreements and goes by the name of the Black Hole information “paradox.”
The usual account goes like this: According to quantum theory, the quantum state of an isolated physical system provides a complete description of such a system. That state evolves according to an evolutionary law that allows for the exact prediction of the corresponding state at any other time in the future, or the retrodiction of the state of the system in the past. On the other hand, a black hole of a certain mass and angular momentum could have been formed in a large number of ways. If the black hole evaporates completely, leaving only the thermal radiation, which is fully characterized in very simple ways, there seems to be no way in which it might encode all the information needed to retrodict, with precision, the exact quantum state of the matter that gave rise to the black hole in the first place. Thus, from the details of the final state, it would be impossible to retrodict the detailed state from which the black hole was initially formed, in conflict with what is expected, given the characteristics of the evolution laws of quantum theory. This, to many people, indicates that we face a “paradox.”
A closer look into the problem reveals that things are not so straightforward (and explains why I put the word paradox in quotation marks). The point is that the claim that according to quantum theory we should be able to retrodict the detailed state from which the black hole was initially formed is just false. Such a conclusion would only follow if one just focuses on the U-process and completely ignores the R process. Thus, it is natural to tie the consideration of the issues arising in connection with black hole evaporation and the fate of information with the resolutions of the measurement problem.6
One of the most attractive solutions to the measurement problem is provided by spontaneous collapse. Starting in 2015, my colleagues and I have considered7 and analyzed in detail, with the help of simplified models,8 whether the use of such theories, in the context of the black hole evaporation, could fully address the issue. Our analysis so far indicates that the answer is yes, provided the spontaneous collapse rate increases with the curvature of spacetime. If that is the case, then the small level of information erasure that is normally associated with the spontaneous collapse becomes efficient enough, due to the increasing curvature in the deep interior of the black hole, to account for all the information that seems to be erased when it evaporates completely.
The work must continue to sort out the open issues and details of the exact form of the theory, and to find other situations where these ideas could be put to test. Although things are not yet settled, the possibility exists that a collective resolution of problems as diverse as that of Schrödinger’s cat, the black hole information issue, and the puzzling aspects of inflationary cosmology, might result from the consideration of spontaneous collapse. We have recently found other issues where this approach might be of help, including the possibility to account for the very low entropy of the initial state of the universe,9 and a path to understand the nature and magnitude of the dark energy.10 The use of spontaneous collapse theories in situations involving gravitation seems to be a very promising and exciting research path indeed.
Daniel Sudarsky has held visiting positions at the University of Chicago, Penn State University, the University of Buenos Aires in Argentina, the University of Marseille in France, and NYU. He is currently a member of the Board of Directors of the John Bell Institute for the Foundations of Physics and professor at the Institute for Nuclear Sciences of the National Autonomous University of México.
1. Ghirardi, G.C., Rimini, A., & Weber, T. Unified dynamics for microscopic and macroscopic systems. Physical Review D 34, 470-491 (1986); Pearle, P. Combining stochastic dynamical state-vector reduction with spontaneous localization. Physical Review A 39, 2277-2289 (1989); for a relatively recent review see Bassi, A. & Ghirardi, G. Dynamical reduction models. Physics Reports 379, 257-426 (2003).
2. Maudlin, T. Three measurement problems. Topoi 14, 7-15 (1995).
3. Perez, A., Sahlmann, H., & Sudarsky, D. On the quantum mechanical origin of the seeds of cosmic structure. Classical and Quantum Gravity 23, 2317-2354 (2006).
4. Planck Collaboration (Akrami, Y., et al.) Planck 2018 results. X. Constraints on inflation. arXiv:1807.06211 (2019).
5. León, G., Kraiselburd, L., & Landau, S.J. Primordial gravitational waves and the collapse of the wave function. Physical Review D 92, 083516 (2015); Majhi, A. Okón, E., & Sudarsky, D. Reassessing the link between B-modes and inflation. Physical Review D 96, 101301 (2017); León, G., Majhi, A., Okón, E., & Sudarsky, D. Expectation of primordial gravity waves generated during inflation. Physical Review D 98, 023512 (2018).
6. In this, we have been strongly influenced by considerations in this regard made over 3 decades ago by works such as Penrose, R. Time asymmetry and quantum gravity. In Isham, C.J., Penrose, R., & Sciama, D.W. (Eds.) Quantum Gravity II (1981); Wald, R.M. Quantum gravity and time reversibility. Physical Review D 21, 2742 (1980).
7. Okón, E. & Sudarsky, D. The black hole information paradox and the collapse of the wave function. Foundations of Physics 45, 461-470 (2015); Okón, E. & Sudarsky, D. Losing stuff down a black hole. Foundations of Physics 48, 411 (2018).
8. Modak, S., Ortiz, L., Peña, I. & Sudarsky, D. Non-paradoxical loss of information in black hole evaporation in collapse theories. Physical Review D 91, 124009 (2015); Bedingham, D., Modak, S.K., & Sudarsky, D. Relativistic collapse dynamics and black hole information loss. Physical Review D 94, 045009 (2016).
9. Okón, E. & Sudarsky, D. A (not so?) novel explanation for the very special initial state of the universe. Classical & Quantum Gravity 33 (2016).
10. Josset, T., Perez, A. & Sudarsky, D. Dark energy as the weight of violating energy conservation. Physical Review Letters 118, 021102 (2017); Perez, A. & Sudarsky, D. Dark energy from quantum gravity discreteness. Physical Review Letters 122, 221302 (2019).
Top 30+ Famous Atheist Scientists That You Should Know
To celebrate scientists and scientific advancements, we have collected a list of the most famous atheist scientists that will inspire us for the greater good.
What do you get when you combine a love of science with an utter disdain for organized religion? An atheist scientist, of course.
Many of the most famous scientists of all time were atheists. And their belief in science was so strong that it inspired them to make incredible discoveries about the world around us.
Here are some of the most notable atheist scientists who have made significant contributions to the world of science.
Famous Atheist Scientists
#39. Christian Bohr (1855-1911): Top Medical Scientist Who Didn’t Believe That God Exists
What makes Christian Bohr famous?
Christian Bohr was a Danish physician who is best known for describing dead space in physiology and for his work on the Bohr effect. He was also the father of Nobel Laureate Niels Bohr.
Bohr’s first major contribution to science was his description of dead space, which he described as “the amount of air that remains trapped in the lungs after a person has exhaled.”
This discovery led to a greater understanding of how much oxygen people can take in during a single breath and how much carbon dioxide they can exhale during the same period. It also led to improved methods of measuring gas exchange between blood and alveoli, which is vital for understanding lung function.
[Source: Wikipedia]
#38. Sydney Brenner (1927-2019): The Nobel Laureate Who Contributed To Our Understanding of Genes
What makes Sydney Brenner famous?
If you’re a fan of science, you might be familiar with Sydney Brenner. He wrote more than 1,000 publications and is known for his research in developmental biology, which he began at Berkeley’s Molecular Sciences Institute.
Born in South Africa, Brenner first became interested in genetics while a graduate student at the University of Oxford. He was working on the genetic code when he realized that there were gaps in knowledge about how developmental processes occur.
In particular, he wanted to know why plants and animals have roughly the same number of genes. This led him to create a model organism (a roundworm species) that scientists could use to study how these processes work.
This roundworm has been extensively used by other researchers since then. His model organism for development helped spur new research in this area of science.
Additionally, Brenner was jointly awarded the Nobel Prize in Physiology or Medicine in 2002 for his contributions to understanding how genes function during development.
What’s the best Sydney Brenner quote?
[Source: The Nobel Prize ]
#37. Subrahmanyan Chandrasekhar (1910-1995): Indian Nobel Laureate
What makes Subrahmanyan Chandrasekhar famous?
Subrahmanyan Chandrasekhar was a brilliant astrophysicist who studied stars and their structure, including how they evolve. His work earned him the Nobel Prize in Physics in 1983. Chandrasekhar’s work contributed to our understanding of black holes and white dwarfs, among other things.
One of his most famous discoveries is that white dwarf stars have an upper limit on how massive they can become. This limit is named after him—the Chandrasekhar limit.
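For reference (a standard value, not given in the original), the limit is roughly
$$M_{\mathrm{Ch}} \approx 1.4\, M_{\odot},$$
about 1.4 times the mass of the Sun; a white dwarf heavier than this cannot be supported by electron degeneracy pressure.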
What’s the best Subrahmanyan Chandrasekhar quote?
[Source: Encyclopedia Britannica]
#36. Boris Chertok (1912-2011): The Scientist Who Was Influential in the Soviet Union’s Space Program
What makes Boris Chertok famous?
Do you want to read about the Soviet space program? Rockets and People is your best book on this exciting topic.
Let’s take a look at the person who wrote this exciting read!
Boris Chertok was a Russian engineer best known for his contribution to developing advanced control systems for missiles and rockets.
At the end of his studies, Chertok earned a degree in electrical engineering and became interested in aviation technology.
In 1946, Chertok joined the Red Army Air Force as an engineer and was stationed at a base near Moscow. Here, he met Sergei Korolev—the man who would become famous for being one of Russia’s leading rocket scientists.
Korolev, Chertok, and several other engineers from their unit helped develop various aspects of rocketry, including guidance systems for missiles.
[Source: NASA]
#35. George Gamow (1904-1968): The Scientist Who Explained Alpha Decay
What makes George Gamow famous?
George Gamow was a Russian cosmologist, theoretical physicist, and author. His work on the big bang theory earned him a place in the history books as one of the most influential physicists of the 20th century. He is best known for his contributions to nuclear physics and cosmology.
Gamow explained alpha decay using concepts from theoretical physics. His contributions to nuclear physics include developing the liquid drop model, which is useful in studying atomic nuclei.
[Source: Encyclopedia Britannica ]
#34. William Kingdon Clifford (1845-1879): The Person Who Introduced Geometric Algebra To Math
What makes William Kingdon Clifford famous?
William Kingdon Clifford was an English mathematician. He is best known for his contributions to science.
Clifford was born in 1845 and died in 1879 at the age of 34, but he managed to make a huge impact on the world of math during his short lifetime. One of his most influential works introduced geometric algebra to the world of mathematics.
This concept has become an important part of physics, engineering, and computer science.
In addition to his contributions to geometry and algebra, Clifford also made major contributions to physics. He argued that gravitation results from geometry.
[Source: Encyclopedia Britannica]
#33. Samuel T. Cohen (1921-2010): The Father of The Neutron Bomb
What makes Samuel T. Cohen famous?
Samuel T. Cohen was an American physicist who is best known for his contribution to the neutron bomb, a controversial weapon that emits deadly radiation.
Cohen was born on January 25, 1921, in New York City. He studied at the University of California and then went on to work for the US army.
In 1944, Cohen joined Los Alamos National Laboratory as a research scientist. During his time there, he made significant contributions to the development of nuclear weapons technology.
[Source: Wikipedia]
#32. Richard Dawkins (1941-Present): The Outspoken Atheist Whose Work Revolutionized Evolutionary Biology
What makes Richard Dawkins famous?
Richard Dawkins is an English evolutionary biologist best known for developing the gene-centric approach to evolution. He popularized this concept through his book, The Selfish Gene.
His text, The Extended Phenotype, explores how natural selection works at different levels. He has also written several books for general audiences: The Blind Watchmaker, The God Delusion, and A Devil’s Chaplain.
As part of the New Atheist Movement, Dawkins doesn’t believe God created the world. He often criticizes religious beliefs as being “anti-science.”
His outspokenness on this issue has made him controversial among many religious people who believe he is attacking their beliefs without good reason or evidence to back up his claims.
[Source: Encyclopedia Britannica]
#31. James Chadwick (1891-1974): The Father of Modern Physics Who Lacked Religious Faith
What makes James Chadwick famous?
James Chadwick was a British physicist and one of the most influential scientists ever to come out of Britain. His contributions to physics are still being recognized today by students, teachers, and researchers in all fields of science.
For example, his work on the Military Application of Uranium Detonation (MAUD) Report influenced the United States government’s decision to pursue atomic weapons during World War II.
He discovered the neutron, which revolutionized atomic physics. He even earned the Nobel Prize for this achievement.
[Source: The Nobel Prize]
#30. Francis Crick (1916-2004): The Co-Discoverer of DNA Structure
What makes Francis Crick famous?
Francis Crick (molecular biologist) is known as one of the individuals who discovered the exact structure of DNA, along with James Dewey Watson.
While employed at the University of Cambridge, the pair became friends. In 1962, they were the recipients of the Nobel Prize for their contributions to science.
Despite his beliefs about the universe’s origin, Francis Crick remains one of the best scientists of all time.
What’s the best Francis Crick quote?
“Chance is the only source of true novelty.”
[Source: Encyclopedia Britannica]
#29. Meghnad Saha (1893-1956): Father of Modern Astrophysics
What makes Meghnad Saha famous?
Meghnad Saha was an astrophysicist who made some enormous contributions to astronomy. He came up with an equation allowing astronomers to estimate stars’ characteristics.
This Saha ionization equation allows astronomers to relate a star’s spectral class to the chemical and physical conditions in its atmosphere. His work helps us understand stars and how they function.
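For reference, a standard form of the Saha equation (not spelled out in the original) relates the populations of two successive ionization states $i$ and $i+1$ to the electron density $n_e$ and temperature $T$:
$$\frac{n_{i+1}\, n_e}{n_i} = \frac{2 g_{i+1}}{g_i} \left( \frac{2\pi m_e k_B T}{h^2} \right)^{3/2} e^{-\chi_i / k_B T},$$
where $\chi_i$ is the ionization energy and the $g$'s are statistical weights.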
[Source: Encyclopedia Britannica]
#28. Paul Dirac (1902-1984): The Brain Behind The Dirac Equation
What makes Paul Dirac famous?
Paul Dirac had a productive life. He was an English theoretical physicist and professor who made important contributions to quantum mechanics and quantum electrodynamics.
His best-known work was on the Dirac equation, which describes spin-1/2 particles such as the electron in terms of four-component complex spinors. The equation predicted the existence of antimatter.
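For reference, the equation itself, written in natural units with $\gamma^\mu$ the Dirac matrices:
$$\left( i\gamma^\mu \partial_\mu - m \right) \psi = 0.$$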
Dirac also pioneered the fields of quantum electrodynamics and mechanics. In 1933, he shared the Nobel Prize in Physics with Erwin Schrödinger for their work on atomic theory.
[Source: The Nobel Prize]
#27. Lev Landau (1908-1968): The Nobel Prize-Winning Theoretical Physicist
What makes Lev Landau famous?
Lev Landau was born in the Russian Empire and rose to become one of the most important figures in history.
Landau is best known for his contributions to theoretical physics, particularly quantum mechanics. He co-discovered the density matrix that is essential in quantum mechanics. He also developed the theory of diamagnetism.
But it was his concept of superfluidity that earned him global recognition. It helped scientists understand liquid helium II. Lev Landau received the 1962 Nobel Prize in Physics for his work on this theory.
What’s the best Lev Landau quote?
“Everybody has a capacity for a happy life. All these talks about how difficult times we live in, that’s just a clever way to justify fear and laziness.”
[Source: The Nobel Prize]
#26. Andrei Sakharov (1921-1989): The Father of the Soviet Union’s Hydrogen Bomb
What makes Andrei Aleksandrovich Sakharov famous?
Andrei Aleksandrovich Sakharov was a Soviet nuclear physicist and activist who became a dissident against the Soviet Union’s government.
He was a brilliant scientist who contributed to Soviet research into thermonuclear weapons, in particular by designing the hydrogen bomb RDS-37. Sakharov did not fight in World War II; instead, he worked in a factory during the war.
Sakharov would call for peace with his efforts to push for disarmament, earning him the 1975 Nobel Peace Prize.
What’s the best Andrei Sakharov quote?
“A country which does not respect the rights of its own citizens will not respect the rights of its neighbors.”
[Source: The Nobel Prize]
#25. Joseph-Louis Gay-Lussac (1778-1850): One of The Fathers of Modern Chemistry
What makes Joseph-Louis Gay-Lussac famous?
Joseph-Louis Gay-Lussac was a French chemist who made many contributions to the field of chemistry. He is most famous for discovering the composition of water—one part oxygen and two parts hydrogen.
Gay-Lussac’s name became synonymous with gas laws when he published his law in 1809. The law now taught under his name states that the pressure of a fixed amount of gas held at constant volume is directly proportional to its absolute temperature. This law has been used ever since as an important foundation for understanding gases.
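A one-line worked example of the law (assuming the constant-volume form stated above):
$$\frac{P_1}{T_1} = \frac{P_2}{T_2} \quad\Longrightarrow\quad P_2 = P_1\,\frac{T_2}{T_1},$$
so heating a sealed container from 300 K to 600 K doubles its pressure.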
[Source: Science History Institute]
#24. J.B.S. Haldane (1892-1964): One of The Fathers of Medical Genetics
What makes J.B.S. Haldane famous?
John Burdon Sanderson Haldane was born in Oxford, England, on November 5, 1892, to a Scottish father and mother.
He began his career in Britain before moving to India to work for the government. Haldane is known for developing gene maps for color blindness and hemophilia on the X chromosome.
Besides pioneering in vitro fertilization, he was also one of the first people to suggest that sickle cell disease is responsible for some level of immunity against malaria.
Haldane also described gene linkage in mammals. He proposed that certain traits are inherited together. This is because they are expressed by genes located close together on chromosomes.
Haldane’s revolutionary work in genetics led him down a path toward a better understanding of evolution itself. He saw this quest as necessary to understand how humans fit into the world around them.
His research was also integral in developing new medical treatments that helped millions worldwide who today suffer from hereditary diseases like hemophilia or blood cancer.
What’s the best J.B.S. Haldane quote?
“This is my prediction for the future: whatever hasn’t happened will happen, and no one will be safe from it.”
[Source: Encyclopedia Britannica]
#23. John Maynard Smith (1920-2004): The British Engineer Who Pioneered Population Genetics
What makes John Maynard Smith famous?
John Maynard Smith was a British geneticist and evolutionary biologist born on January 6, 1920. He studied aeronautical engineering and served in World War II.
After the war, he returned to school and earned a degree in biology under J.B.S. Haldane.
Maynard Smith is best known for his contributions to many concepts in genetics and evolutionary biology, such as signaling theory and the evolution of sex. He worked alongside other prominent figures in genetics, including George Robert Price (who developed the Price equation).
[Source: National Library of Medicine]
#22. Alan Hale (1958-Present): The Co-Discoverer of the Hale-Bopp Comet
What makes Alan Hale famous?
Alan Hale is a Japanese-born American astronomer who is best known for his contributions to the field.
Hale was born in Tokyo in 1958 and moved to the United States as a child with his family. He attended New York State University as a graduate student.
Hale focused on studying stars, planetary systems, comets, and asteroids. His most notable discovery was the Hale-Bopp Comet, which appeared in 1995.
That same night, another amateur astronomer, Thomas Bopp, independently spotted the comet while observing with friends, becoming its co-discoverer. The comet has since faded from view.
[Source: Encyclopedia Britannica]
#21. Rita Levi-Montalcini (1909-2012): The Person Who Discovered The Nerve Growth Factor
What makes Rita Levi-Montalcini famous?
Rita Levi-Montalcini was a Nobel prize-winning Italian neuroscientist. She was born in Turin, Italy, to a Jewish family.
Although she received her medical degree from the University of Turin, Benito Mussolini’s Manifesto of Race cut short her academic career.
During World War II, Levi-Montalcini studied nerve cells in a private lab in her bedroom. Her most famous research came about when she discovered nerve growth factor (NGF) in chicken embryos.
For her achievements, Rita Levi-Montalcini became a Senator for Life in 2001 when Carlo Azeglio Ciampi, the then president of Italy, appointed her to the senate.
In 1986, she won the Nobel Prize in Physiology or Medicine with Stanley Cohen for their contributions to the discovery of NGF.
What’s the best Rita Levi-Montalcini quote?
[Sources: The Nobel Prize, Annual Reviews with Rita Levi-Montalcini]
#20. Joseph Louis Lagrange (1736-1813): The Man Who Contributed To The Study of Celestial Mechanics
What makes Joseph Louis Lagrange famous?
The best way to understand Joseph Louis Lagrange is to know what he did.
Lagrange was an Italian-born astronomer and mathematician who made major contributions to celestial mechanics. This is the branch of astronomy that deals with the motions of celestial bodies.
He’s best known for providing the most detailed information on celestial mechanics after Isaac Newton’s work.
Lagrange focused on solving problems related to celestial mechanics. And he made a name for himself in this field.
[Source: Encyclopedia Britannica]
#19. Abraham Maslow (1908-1970): The Person Behind Maslow’s Hierarchy of Needs
What makes Abraham Maslow famous?
Abraham Maslow was an American psychologist best known for creating Maslow’s hierarchy of needs.
The “hierarchy of needs,” as it’s called, is a pyramid that illustrates the way humans develop their needs to achieve self-actualization.
It begins with physiological needs like food, water, and shelter at the bottom and moves up through safety needs like security and stability. Next come love and belongingness needs, then esteem needs, and finally, self-actualization at the top.
Maslow’s theory is that people must fulfill their lower-level needs before moving on to higher ones. If you don’t have food or water, you aren’t going to be worried about anything else until you fulfill those needs.
Once you have taken care of those lower levels, you can focus on higher-level needs—like security or love—that will help you feel more fulfilled in life.
[Source: Encyclopedia Britannica]
#18. John Forbes Nash Jr. (1928-2015): The Mathematician & A Nobel Laureate
What makes John Forbes Nash Jr. famous?
John Forbes Nash Jr. was born in Bluefield, West Virginia, on May 22, 1928. He studied at the Carnegie Institute of Technology before going to Princeton University to continue his studies in mathematics.
Nash was awarded the Nobel Prize in Economics in 1994 for his contributions to game theory. He is best known for his Nash equilibrium theory, which he first formulated in 1950. The idea has been widely used in economics, game theory, and other fields such as finance and philosophy.
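To make the concept concrete, here is a minimal sketch (hypothetical payoff numbers) that finds the Nash equilibrium of the classic prisoner's dilemma by brute force:

```python
import itertools

# Hypothetical prisoner's-dilemma payoffs: (row player, column player),
# strategies C = cooperate, D = defect.
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """No player can gain by unilaterally switching strategy."""
    row_ok = all(payoff[(row, col)][0] >= payoff[(r, col)][0] for r in "CD")
    col_ok = all(payoff[(row, col)][1] >= payoff[(row, c)][1] for c in "CD")
    return row_ok and col_ok

print([cell for cell in itertools.product("CD", repeat=2) if is_nash(*cell)])
# [('D', 'D')] -- mutual defection is the unique Nash equilibrium
```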
He was also involved in other areas of mathematics, such as partial differential equations, number theory, and graph theory.
His story is the basis for the movie A Beautiful Mind.
What’s the best John Forbes Nash Jr. quote?
[Source: The Nobel Prize]
#17. Thomas Hunt Morgan (1866-1945): The Nobel Laureate & An Expert in Heredity
What makes Thomas Hunt Morgan famous?
Thomas Hunt Morgan was an American geneticist and evolutionary biologist best known for his work on the fruit fly. He laid the foundation of modern genetics.
Morgan was born in Kentucky, USA, on September 25, 1866, and studied at Johns Hopkins University, earning his Ph.D. there. His research focused on understanding the mechanisms of heredity and how genes are passed from one generation to another.
This research led him to discover that genes are carried on chromosomes and that traits are determined by gene pairs with dominant and recessive forms.
Morgan also found that genes are arranged linearly along chromosomes and can be inherited together or independently. In 1933, Morgan received the Nobel Prize in Physiology or Medicine for his work on chromosomes and their function in heredity.
[Source: Encyclopedia Britannica]
#16. Gerhard Armauer Hansen (1841-1912): The Discoverer of The Bacteria That Causes Leprosy
Credits: Science Direct
What makes Gerhard Armauer Hansen famous?
Gerhard Armauer Hansen was a Norwegian physician who, in 1873, discovered the bacterium that causes leprosy. It was named Mycobacterium leprae.
Hansen was one of the first scientists to study the disease on a large scale. He decided to research leprosy because it had been around for thousands of years but no one knew what caused it or how to treat it.
Hansen thought that if he could find out more about the bacterium that causes leprosy, he might discover a cure for this terrible disease.
[Source: National Library of Medicine]
#15. Erwin Schrödinger (1887-1961): The Father of Quantum Theory Who Didn’t Believe In Personal God
Credits: New Scientist
What makes Erwin Schrödinger famous?
If you’ve ever been in a physics class, you might know Erwin Schrödinger—the Austrian physicist who won the Nobel Prize for his contributions to quantum theory.
Erwin Schrödinger was born in Vienna but later became an Irish citizen after fleeing Nazi-annexed Austria. He was proficient in both theoretical and experimental physics.
Erwin Schrödinger won the Nobel Prize in Physics for his contributions to quantum theory, above all his development of wave mechanics.
At its heart is the Schrödinger equation, which describes how the quantum state of a system changes over time. Scientists use this equation to study physics and chemistry.
Schrödinger also created a famous thought experiment called “Schrodinger’s Cat” to explain concepts related to quantum mechanics.
This experiment involves placing a cat in a sealed box with a vial of poison and a radioactive substance. Until someone opens the box and looks inside, there is no way of knowing whether the cat is alive or dead, so quantum mechanics describes the cat as being in both states at once.
[Source: The Nobel Prize]
#14. Ted Nelson (1937-present): The Pioneer of Hypertext
Credits: Quartz
What makes Ted Nelson famous?
Ted Nelson is a man who can make you feel like you’re traveling through time and space with him. His work in computer science is so far-reaching that it’s hard to believe that he wasn’t born in the 21st century.
You can tell that he’s seen and done things that made him question how we look at information and process it. He has the mind of an explorer, and his passion for the future is contagious.
Born in Chicago, Ted Nelson is a man of many hats. He’s a sociologist, computer scientist, philosopher, and writer. Nelson is also a literary romantic who thinks of himself as an American Orson Welles of software.
He is famous for coming up with the terms hypermedia, hypertext, intertwingularity, virtuality, and transclusion. These terms are important to computer science as they relate to how we interact with information.
[Source: Ted Nelson]
#13. Stephen Hawking (1942-2018): One of The Greatest Scientists of Our Time
Credits: Daily News Egypt
What makes Stephen Hawking famous?
Stephen Hawking was a man who made an indelible mark on the world. He was born in 1942 in England and is best known for his work in theoretical physics.
He contributed to the study of the universe by focusing on its origin and structure, which continues to appeal to millions worldwide.
In addition to being a brilliant scientist, Stephen Hawking was also an avid writer. His books continue to appeal to millions of readers globally.
What is the best Stephen Hawking quote?
“Intelligence is the ability to adapt to change.”
[Source: Encyclopedia Britannica]
#12. Pierre Curie (1859-1906): The Nobel Laureate
Credits: History and Biography
What makes Pierre Curie famous?
Pierre Curie, a French physicist and chemist, was considered one of the most influential people in history. He was born in Paris in 1859.
Curie is most famous for his research on radioactivity. Together with his wife, Marie, he discovered the radioactive elements polonium and radium. He also studied magnetism, crystallography, and many other fields.
This research earned him the 1903 Nobel Prize in Physics, shared with Marie Curie and Henri Becquerel.
Pierre and Marie Curie were the first married couple to be awarded a Nobel Prize; they received it jointly in 1903 for their research on radiation.
[Source: The Nobel Prize]
#11. Irène Joliot-Curie (1897-1956): The Woman Who Followed in Her Parents’ Footsteps
Frédéric and Irène Joliot-Curie. Credits: Encyclopedia Britannica
What makes Irene Joliot-Curie famous?
Irène Joliot-Curie, the French physicist and chemist, was born in Paris in 1897. She grew up with her parents, who were both scientists: her father, Pierre Curie, and her mother, Marie Skłodowska Curie, together discovered polonium in 1898.
In 1926, Irène married Frédéric Joliot, who was also a scientist. Together, they worked on many projects, including the discovery of artificially induced radioactivity. In 1935, they won the Nobel Prize in Chemistry, becoming the second married couple to win one after her parents.
What’s the best Irène Joliot-Curie quote?
“The farther the experiment is from theory, the closer it is to the Nobel Prize.”
[Source: The Nobel Prize]
#10. Frederic Joliot-Curie (1900-1958): One of The Scientists Who Contributed To The Curie Family Legacy
Credits: Science History Institute
What makes Frederic Joliot-Curie famous?
Frederic Joliot-Curie, born in Paris in 1900, was a French physicist and husband to Irene Joliot-Curie.
Frédéric Joliot-Curie is known for his research on atomic structure, nuclear physics, and the uses of radioisotopes. He worked at the Radium Institute with his wife, Irène, a famous scientist in her own right.
His work focused on using radioactive isotopes to study chemical and nuclear processes. This research led him to develop new methods of producing radioactive isotopes by bombarding stable isotopes with alpha particles or neutrons.
Frederic Joliot-Curie and Irene Joliot-Curie are best known as the second married couple to receive the prestigious Nobel Prize. They won the 1935 Nobel Prize in Chemistry for discovering induced radioactivity.
The Curie family won five Nobel Prizes in all; Marie Curie alone won two of them.
[Source: The Nobel Prize]
#9. John McCarthy (1927-2011): The Turing Award-Winning Computer Scientist
Credits: The Independent
What makes John McCarthy famous?
John McCarthy was an American cognitive and computer scientist best known for founding the field of artificial intelligence. He developed Lisp, a programming language that has spawned a whole family of dialects.
McCarthy is credited with the invention of garbage collection in computer science. This is the process of automatically freeing up memory from objects that are no longer needed by programs.
His work played a significant role in the creation of the ALGOL programming languages. McCarthy won the highest prize in computer science, the Turing Award, in 1971 for his contributions to AI.
[Source: Encyclopedia Britannica]
#8. J. Robert Oppenheimer (1904-1967): The Father of the Atomic Bomb
Credits: Biography
What makes J. Robert Oppenheimer famous?
If you’re unfamiliar with J. Robert Oppenheimer, it may be because he’s not a household name. Yet he is one of the most influential people in modern history, not least because he led the development of the atomic bomb that ended World War II.
Oppenheimer was born in 1904 in New York City and received his education from several institutions, including Harvard College and Cambridge University. He was an expert on nuclear physics and would lead the team of scientists who designed the nuclear bomb used in World War II.
Although he chaired the General Advisory Committee of the Atomic Energy Commission under President Truman, he criticized the development of a thermonuclear weapon.
Afterward, he became a key figure in President Dwight Eisenhower’s “Atoms for Peace” program, promoting peaceful uses of nuclear power and development.
What’s the best J. Robert Oppenheimer quote?
“Now I am become Death, the destroyer of worlds.”
[Source: Atomic Heritage Foundation]
#7. Richard Feynman (1918-1988): The ‘Little Jewish Boy’ Who Became A Nobel Laureate
Credits: Archinect
What makes Richard Feynman famous?
Richard Feynman was a man who stood at the intersection of two worlds. He was born in New York City in 1918 to Jewish parents. Feynman studied at the Massachusetts Institute of Technology. He went on to earn his Ph.D. from Princeton in 1942.
This theoretical physicist worked on the Manhattan Project at Los Alamos under J. Robert Oppenheimer during World War II, helping to build the atomic bomb that ended the war.
After the war, Feynman helped develop the theory of quantum electrodynamics, which is still widely used in physics research worldwide. Turning later to particle physics, he proposed the parton model, which describes protons and neutrons as collections of point-like constituents, now identified with quarks and gluons.
Feynman received the 1965 Nobel Prize in Physics for his work on quantum electrodynamics, sharing it with Julian Schwinger and Sin-Itiro Tomonaga.
[Source: The Nobel Prize]
#6. James D. Watson (1928-present): Father of DNA Research
Credits: Wikipedia
What makes James D. Watson famous?
There have been a lot of famous scientists in the world, but few are as renowned as James Dewey Watson. You might not recognize the name, but if you’re a science nerd, chances are you’ve heard of his work.
For starters, he’s been called “the father of DNA research.” He was born in 1928 in Chicago. He went to the University of Chicago for his undergraduate degree in zoology before moving to Indiana to complete his doctorate.
Watson is best known for his work with Francis Crick and Maurice Wilkins on the structure of DNA. He and Crick proposed that DNA is a double helix: two strands wound around each other like a twisted ladder.
This was the first time anyone had suggested such a structure for DNA. This study contributed to our understanding of how genes are passed on from generation to generation.
Watson shared the Nobel Prize with Maurice Wilkins and Francis Crick for their contributions to our understanding of nucleic acids.
[Source: The Nobel Prize]
#5. Timothy J. Berners-Lee (1955-present): The Inventor of The World Wide Web
Credits: W3C
What makes Timothy Berners-Lee famous?
Tim Berners-Lee is a computer scientist born in London in 1955. He was educated at Emanuel School and then earned a BA in physics at The Queen’s College, Oxford.
Berners-Lee is best known for inventing the World Wide Web, a way for computers to share documents with each other over networks using simple commands. This invention has led to many new ways of using computers and changed how people use the internet daily.
Working at CERN in Europe, he built the Web as the first global hypertext information system, along with HTML (HyperText Markup Language), the language used to write web pages. A key feature of this system is the “hyperlink,” which allows you to embed links in documents on web pages.
Berners-Lee continues to work on improving how people interact with technology through his foundation called The World Wide Web Foundation (WWWF).
[Sources: Encyclopedia Britannica, WWWF]
#4. Carl Sagan (1934-1996): The Man Who Brought US The Universe
Credits: Smithsonian Magazine
What makes Carl Sagan famous?
Carl Sagan was a planetary scientist, born in Brooklyn in 1934, who studied physics at the University of Chicago. He is known for popularizing science through his books and TV shows in the 1970s. His ideas about extraterrestrial life and the science behind them were revolutionary at the time.
Sagan served as a NASA adviser and conducted research largely on extraterrestrial life throughout his career. He is most famous for his work on the messages to outer space carried by the Voyager spacecraft, and for the TV series Cosmos, which he co-wrote and narrated.
Carl Sagan won several awards for his contributions to cosmology, astronomy, and science in general. He received NASA’s Distinguished Service Award in 1977 and the Pulitzer Prize the following year.
[Source: Smithsonian Magazine]
#3. Linus Pauling (1901-1994): A Pioneering Scientist Who Defied The Odds To Win Two Nobel Prizes
Credits: The Famous People
What makes Linus Pauling famous?
There are a lot of reasons to be inspired by Linus Pauling.
Born in Portland, Oregon, he became one of the most influential American scientists in history. His work helped revolutionize our understanding of the chemical bond and its properties, leading to a host of breakthroughs in medicine and science. He also played a big part in the development of a synthetic blood plasma used during World War II.
His research on proteins would prove life-changing. He found evidence that abnormal hemoglobin was the cause of sickle cell anemia. This discovery opened the door to more research on molecular genetics.
Linus Pauling was also an outspoken proponent of nuclear disarmament. He won two Nobel Prizes: the 1954 Nobel Prize in Chemistry for his research on the chemical bond, and the 1962 Nobel Peace Prize for his campaign against nuclear weapons testing.
[Source: Nobel Prize]
#2. Alfred Nobel (1833-1896): The Founder of Nobel Prizes
Credits: News18
What makes Alfred Nobel famous?
Yes, the Nobel Prize is named after Alfred Nobel.
Alfred’s most famous invention was dynamite, patented in 1867, which he developed by combining nitroglycerin with an absorbent substance.
It was a breakthrough in the field of explosives because it made nitroglycerin far safer to handle and transport.
Dynamite was popular because it allowed people to blast rock and structures without the extreme danger of handling raw nitroglycerin. It is useful in demolition, construction, mining, and quarrying.
But Alfred’s most crucial contribution was establishing the Nobel Prizes, which honor those who make significant contributions to society through their work in physics, chemistry, physiology or medicine, literature, or peace.
In 1968, the Swedish central bank, Sveriges Riksbank, established the associated prize in economic sciences in memory of Alfred Nobel.
[Source: The Nobel Prize]
#1. Alan Turing (1912-1954): The Father of Modern Computer Science
Credits: War History Online
What makes Alan Turing famous?
Alan Turing is dubbed the “father of computer science.”
You may have come across someone saying something is “Turing complete.” The term implies it can be programmed to perform virtually any task.
You may have also heard of the “Nobel Prize of computer science.” That’s right: it’s the Turing Award, named in his honor.
Finally, there’s the 50-pound bank note, which bears his portrait. And if you have seen the movie “The Imitation Game,” you will know that Turing was instrumental in winning the information war against Nazi Germany during WWII.
Born in 1912 in Maida Vale, London, Alan Turing was educated at Sherborne School in Dorset and then attended King’s College, Cambridge, where he studied mathematics as an undergraduate.
After graduating, he earned a Ph.D. in mathematics from Princeton University in 1938. In 1948 he joined Manchester University’s Computing Machine Laboratory, where he contributed to early research on machine intelligence and computing.
In his 1950 paper “Computing Machinery and Intelligence,” Turing asked whether machines can think and proposed the imitation game, now known as the Turing test, as a practical way to judge whether a machine’s behavior is intelligent.
Turing also introduced the concept of a Universal Turing Machine (UTM), which remains a key theoretical computer science concept today.
During World War II, Turing joined the Government Code & Cypher School at Bletchley Park, where he worked on breaking the German Enigma codes used by German forces. This gave the Allies significant advantages.
Turing is not just remembered for his groundbreaking work in mathematics and computer science—he’s also known for being the first gay man to have his face on a banknote in the United Kingdom.
Alan Turing was prosecuted by the British government in 1952 because of his sexuality and subjected to chemical castration. These woes may have contributed to his death in 1954, which an inquest ruled a suicide.
[Sources: Encyclopedia Britannica, Stanford Encyclopedia of Philosophy]
Final Thoughts
We hope you enjoyed reading this article as much as we loved writing it.
The scientists mentioned in this article are interesting people with unique stories and life experiences.
They have all made important contributions to society, but they also share the common thread—a lack of faith in religion.
This article may help you understand the thinking behind this group of people who are often unfairly judged for their lack of faith in religion.
The Born-Oppenheimer Approximation
The Born-Oppenheimer approximation is the first of several approximations used to simplify the solution of the Schrödinger equation. It simplifies the general molecular problem by separating nuclear and electronic motions. This approximation is reasonable since the mass of a typical nucleus is thousands of times greater than that of an electron. The nuclei move very slowly with respect to the electrons, and the electrons react essentially instantaneously to changes in nuclear position. Thus, the electron distribution within a molecular system depends on the positions of the nuclei, and not on their velocities. Put another way, the nuclei look fixed to the electrons, and electronic motion can be described as occurring in a field of fixed nuclei.
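To put a rough number on this separation of scales: even the lightest nucleus, a single proton, outweighs an electron by a factor of

\frac{m_p}{m_e} \approx 1836

so the electrons adjust essentially instantaneously to any displacement of the nuclei.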
The full Hamiltonian for the molecular system can then be written as:
\mathbf{H} = \mathbf{T}^{elec}(\vec{r}) + \mathbf{T}^{nucl}(\vec{R}) + \mathbf{V}^{nucl-elec}(\vec{R},\vec{r}) + \mathbf{V}^{elec}(\vec{r}) + \mathbf{V}^{nucl}(\vec{R})
The Born-Oppenheimer approximation allows the two parts of the problem to be solved independently, so we can construct an electronic Hamiltonian which neglects the kinetic energy term for the nuclei:
\mathbf{H}^{elec} = -\frac{1}{2}\sum\limits_i^{elecs}\left(\frac{\partial^2}{\partial x_i^2}+\frac{\partial^2}{\partial y_i^2}+\frac{\partial^2}{\partial z_i^2}\right) - \sum\limits_i^{elecs}\sum\limits_I^{nucl}\frac{Z_I}{\lvert \vec{R}_I-\vec{r}_i \rvert} + \sum\limits_i^{elecs}\sum\limits_{j<i}\frac{1}{\lvert \vec{r}_i-\vec{r}_j \rvert} + \sum\limits_I^{nucl}\sum\limits_{J<I}\frac{Z_I Z_J}{\lvert \vec{R}_I-\vec{R}_J \rvert}
Note that the fundamental physical constants drop out with the use of atomic units.
This Hamiltonian is then used in the Schrödinger equation describing the motion of electrons in the field of fixed nuclei:
\mathbf{H}^{elec}\psi^{elec}(\vec{r},\vec{R}) = E^{\textit{eff}}(\vec{R})\psi^{elec}(\vec{r},\vec{R})
Solving this equation for the electronic wavefunction will produce the effective nuclear potential function E^{\textit{eff}}(\vec{R}). It depends on the nuclear coordinates and describes the potential energy surface for the system.
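As an illustration of this step, here is a minimal computational sketch, assuming the open-source PySCF package is installed; the molecule (H2), basis set (STO-3G), and grid of bond lengths are arbitrary demonstration choices, not a recommendation. Solving the electronic problem at a series of fixed nuclear geometries traces out the potential energy surface point by point:

```python
# Sketch: map out E^eff(R) for H2 by solving the electronic Schrodinger
# equation (at the Hartree-Fock level) for a series of clamped geometries.
import numpy as np
from pyscf import gto, scf

bond_lengths = np.linspace(0.5, 2.5, 21)  # in Angstrom
pes = []
for R in bond_lengths:
    # The nuclei are fixed: the geometry is an input, not a dynamical variable.
    mol = gto.M(atom=f"H 0 0 0; H 0 0 {R}", basis="sto-3g", verbose=0)
    e_tot = scf.RHF(mol).kernel()  # electronic energy plus nuclear repulsion
    pes.append(e_tot)

for R, E in zip(bond_lengths, pes):
    print(f"R = {R:4.2f} A    E_eff(R) = {E:12.8f} Hartree")
```

The minimum of the resulting curve approximates the equilibrium bond length; a production calculation would of course use a larger basis set and a correlated method.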
Accordingly, E^{\textit{eff}}(\vec{R}) is also used as the effective potential for the nuclear Hamiltonian:
\mathbf{H}^{nucl} = \mathbf{T}^{nucl}(\vec{R})+ E^{\textit{eff}}(\vec{R})
This Hamiltonian is used in the Schrödinger equation for nuclear motion, describing the vibrational, rotational, and translational states of the nuclei. Solving the nuclear Schrödinger equation (at least approximately) is necessary for predicting the vibrational spectra of molecules.
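To make the nuclear step concrete, here is a pure-NumPy sketch that diagonalizes the one-dimensional nuclear Hamiltonian on a grid; the Morse potential standing in for E^{\textit{eff}}(\vec{R}) uses illustrative placeholder parameters, not fitted values:

```python
# Sketch: vibrational levels from H_nucl = T_nucl + E_eff(R) on a 1D grid.
import numpy as np

mu = 918.0                  # approximate reduced mass of H2 in atomic units
De, a, Re = 0.17, 1.0, 1.4  # Morse depth, width, equilibrium distance (a.u.)

R = np.linspace(0.5, 6.0, 1500)
dR = R[1] - R[0]
Veff = De * (1.0 - np.exp(-a * (R - Re))) ** 2  # stand-in for E_eff(R)

# Kinetic energy by central finite differences: T = -(1/2 mu) d^2/dR^2
diag = 1.0 / (mu * dR**2) + Veff
off = np.full(R.size - 1, -0.5 / (mu * dR**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

levels = np.linalg.eigvalsh(H)[:4]
print("Lowest vibrational levels (Hartree):", levels)
```

The spacings between successive eigenvalues approximate the vibrational transition energies one would compare against an infrared spectrum.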
30 September 2022
Some Issues of Pāli Chronology.
The matter of which parts of the Pāli sutta-piṭaka are older is one that has a tragic past. The first scholar to look systematically at the issue was Caroline Augusta Foley Rhys Davids. As a student, Caroline A. Foley married her (much older) Pāli teacher, Thomas Rhys Davids and together the pair [pictured left] not only became the leading experts on Pāli but also created a lasting organisation, The Pali Text Society (founded 1881), and made the Pāli suttas available to the masses, perhaps for the first time.
Caroline and Thomas had three children but their son Arthur was the apple of her eye. The Rhys Davids family archive (Cambridge University) contains no fewer than 262 letters from Arthur to his mother. Arthur is famous in his own right for being a highly decorated fighter pilot in WWI (one of the original "aces"). But he was tragically killed in action in 1917.
Caroline was heartbroken and like many others of that time, she turned to spiritualism seeking a sense of connection with Arthur. She was a very intelligent and successful woman and she did not start attending tawdry seances or consulting fraudulent "mediums". Rather she took up the more private practice of "automatic writing". This involves taking both sides of a written conversation, but in a detached way that allows a stream of consciousness to flow. She filled many notebooks in this way and they are still held in the archives. This turn only intensified after the death of Thomas Rhys Davids in 1922.
This change in her circumstances forced Rhys Davids to confront the Buddhist view, which till then she had accepted, that there is no soul, nothing substantial that can pass from one life to the next. This would make spiritualism practically impossible. She began to comb through the suttas and eventually concluded that the Buddha had taught an ātman doctrine after all, but covertly, and thus she rescued spiritualism from Buddhism. Much to the disgust of her colleagues, I gather. But Rhys Davids was ambitious and talented, and her next move was to try to prove that the Buddha's ātman doctrine was older than the Buddhist anātman doctrine.
Rhys Davids invented the methods which we use to form conjectures regarding the relative dating of suttas. Still, as with so much else about early Buddhism, there is no external evidence with which we can corroborate or refute these conjectures. We do know that Pāli was a somewhat artificial language built on one or more Prakrit languages. Pāli was likely never anyone's first language, but was rather a "church language" that could serve as a lingua franca for Prakrit speakers. These days we might call it a "conlang".
We have to keep in mind that our evidence for Pāli in the ancient world is scarcely better than our evidence for the Buddha (which is nonexistent). The very oldest extant Pāli document is a small piece of gold foil from the sixth century. The oldest complete Pāli canon is no older than the 15th century. People say that the Pāli canon was written down in the first century, but this is conjecture based on internal references in documents that post-date the suttas by several centuries. The whole history of Buddhism is based on such conjectures with little or no supporting evidence, or on naive use of religious documents for historical purposes. Scholars have, until recently, simply accepted the emic accounts of Buddhist history, adopted emic terminology and time periods, and generally been far too credulous with respect to tradition.
By way of contrast we have several very old physical manuscripts of Buddhist texts from Gandhāra that can be carbon-dated to the first or second century before the Common Era. These are, in fact, the oldest Buddhist documents of any kind. Moreover, a text like the Aṣṭasāhasrikā Prajñāpāramitā is known to have been written by the end of the first century CE because, again, we have a carbon-dated manuscript on birch bark. This is about 400 years earlier than the first physical evidence of Pāli texts.
A few passages in Pāli contain evidence of case endings from a different Prakrit dialect than the one that mainly forms the basis of Pāli. For example, in Pāli the nominative form of the stem buddha is buddho. E.g. buddho dhammaṃ deseti "the Buddha teaches the Dhamma". In day-to-day use, the nominative singular is considered the most basic form of the noun. Traditional dictionaries, for example, use the nominative form. European dictionaries of Indic languages tend to use the "stem" form, a notional form that is rarely (if ever) encountered in practice and that carries no case information. The only place stem forms regularly crop up is as the first member of a compound word.
In a few cases in Pāli we see a nominative form like buddhe. Same word, same case, different pronunciation. Think here about the Heart Sutra dhāraṇī: gate gate... scholars have long tried to shoehorn this into a Classical Sanskrit mould, but really it's Prakrit. "Gate" is not some convoluted feminine locative of the past participle gata or whatever. Rather gate is the nominative singular of gata, i.e. it's just the basic form of a word in practical use in that dialect.
In stories about relative dates, the stray occurrence of such forms as a nominative singular in -e is seen as evidence of antiquity. The idea here is that the case marker -e is archaic and older texts are more likely to have archaic forms.
Frankly, this makes no sense to me. Dialects are generally speaking regional. For example, people often remark on the Tibetan spelling of vajra, i.e. badzra. The substitution of /b/ for /v/ is normal for Eastern India. Tibetans got their Sanskrit terminology from Eastern India. The name of the state of Bihar, for example, derives from the presence of many Buddhist vihara in the past. Indeed we sometimes see this variation in Pāli: e.g. byāpāda "malevolence" (Skt vyāpāda). To the best of my knowledge, this substitution (or the similarly regional initial l/r substitution) is not seen as a sign of antiquity.
We know, from the distribution of Asoka inscriptions, that eastern dialects of Prakrit preferred the -e ending. And the -e ending found in Pāli is sometimes called a "Magadhism" to reflect the usage of the language in the Asoka inscriptions around Patna, the capital of Magadha. It's possible that what we call Māgadhī was the mother language of modern Bihari.
This is not to say that dialects did not change over time. Pāli is a Middle Indic language derived from a pre-classical form of Old Indic (not necessarily the same one that gave rise to Classical Sanskrit). If we accept the conjecture that Pāli was written down circa the first century BCE, and that this fixed the forms at that point (though later editing is clearly evidenced), then we really have to wonder how an archaic form survived for several centuries. I can tell you that when you stumble across one of these forms in practice, it can be very confusing, because buddhe is a meaningful form in Pāli as well. It is the locative singular (the locative is mainly used to indicate the location of the action of a verb in a sentence), e.g. buddho gahe dhammaṃ deseti "the Buddha teaches the Dhamma in the house". This is to say that these odd case endings stand out; one stumbles over them. How does something like that survive in an oral literature for centuries when, every time one encounters it, one is struck by cognitive dissonance?
On the other hand some Magadhisms are ubiquitous, such as the honorific bhante (vocative singular) or the term yebhuyyena "generally" which corresponds to Sanskrit yadbhūyasā. Following regular patterns of sound change, we expect the Pāli to be yad-bhūyena or yad-bhiyyena. Ye is Māgadhī for yad. And note also that we have some Sanskrit loanwords like brāhmaṇa for which we expect the Pāli to be bāmaṇa (see my discussion of this: A Pāli Pun).
Had we not been looking for evidence of chronology we might have concluded those texts that preserve so-called Magadhisms were preserved in a Māgadhī-speaking region where they recognised the forms. In other words, these Magadhisms in Pāli need not be evidence of change over time, they may reflect a text compiled in a different region. The presence of alternative case endings reflect contemporary regional differences in pronunciation. Not that this conjecture is any more solid than the change over time conjecture. Once again, we simply don't know. A chronological explanation is not the most obvious one to me and I think some kind of geographical explanation is probably better.
Another argument for the antiquity of some texts is that they are "less systematic" (with reference to the standardised Buddhism of modern Theravāda) and thus older. This is a form of the teleological fallacy. The idea here is that ideas become more sophisticated and more organised over time. The presumption here is also that the Pāli texts are otherwise homogeneous and form a static backdrop against which change can be discerned. I would argue that Theravāda Buddhism as we meet it in the twenty-first century is a simplified, less sophisticated form of the pluralistic Buddhism we find in early Buddhist texts.
I once again refer readers to my chart of nidāna doctrines. Here we see a number of different lists with different sequences. We note many variants of the standard nidāna sequence with fewer members, notably what's missing is often the first two items: avidyā and saṃskāra. One of the main variants (DN 15) begins with nāmarūpa and vijñāna mutually conditioning each other.
Some of the texts use very different terminology. It is true that one of these is in the Suttanipāta (Sn 862-877), which experts say makes it old, based on the methods we are exploring now. But the sequence in DN 21 is just as odd. Moreover it partially reverses the order of causality found in, say, MN 18. A more sophisticated variant of the standard 12 nidānas is also found in the Suttanipāta (Sn 722-765), which in the standard view makes it later than most other nidāna texts.
The idea of using structural features, like how "systematic" a doctrine is, to determine relative age is starting to look quite doubtful. The logic of it does not account for the variations of the nidānas that we actually find. Again, the standardisation on 12 nidānas, ignoring variants, is considerably less sophisticated than what we see in Pāli. In order to interpret such differences in terms of chronology we have to presume that differences are caused by passing time (that is to say, that the passage of time is what causes variations to arise). But again, we could have chosen to see these as contemporary sectarian differences, for example. It's only when we ignore the obvious sectarianism in Pāli that we see anything like an "underlying unity".
Under this heading we may also discuss the fact that no one claims that the whole Suttanipāta is old, only parts of it. And yet there is no evidence that the Suttanipāta circulated in fragments. It is true that a few parts of the Suttanipāta turn up in other places. The Sela Sutta (Sn 3.7) is identical to MN 92, while the Vāseṭṭha Sutta (Sn 3.9) also occurs at MN 98. But this just tells us that the compiler of the Suttanipāta had access to the same sources as the compiler of the Majjhimanikāya.
I know little about meter, though I have dabbled in analysing Pāli meter from time to time. The argument in this case is that the use of certain metres in, say, Suttanipāta reflects antiquity. These are referred to by Roy Norman as "old metres". I've never been clear how anything can be considered a priori "old". We have no evidence of Buddhist literature before the written texts. We cannot judge the antiquity of a metre in a Buddhist text from the types of metres used in non-Buddhist texts. And I cannot think how else this chronological distinction could be made.
Metred verses are common enough in Pāli and some texts show a preference for one metrical scheme over another. But in a Buddhist context, what constitutes "old"? Old in comparison to what? We have nothing to base a chronology of metre on in a Buddhist context. We may be able to say that non-Buddhist texts show an evolution of the use of metres, but showing that the same evolution happened in Buddhist texts is not possible.
A number of suttas quote other suttas, sometimes verbatim, sometimes by name. An example I have encountered is the Channa Sutta (SN 22.90), in which Ānanda recalls hearing the Kaccānagotta Sutta (SN 12.15) being preached. The Channa Sutta repeats the Kaccānagotta Sutta in its entirety. And this, so the argument goes, proves that the Channa Sutta presumes the prior existence of the Kaccānagotta Sutta. But does it? What if the Channa Sutta is the original context for this text and the Kaccānagotta Sutta is simply a cut-down version of the story?
This alternative possibility is given credence when we study the Pāli version alongside the Āgama version which exists in a Chinese translation (from a Prakrit other than Pāli) and a Sanskrit translation that more or less corresponds to the Chinese. In my (to date) unpublished study of the three versions side by side I note:
[The Pāli], as well as having an abbreviated opening, has no end. It just finishes abruptly, and this reinforces other hints that it is a somewhat fragmented memory of the text.
It is clear that the Pāli record of the Discourse to Kātyāyana is not the ur-text. It's a fragment, with a slightly different selection and arrangement of sentences than the Chinese or Sanskrit versions.
One of the most striking examples of interpolation I know of comes in the Mahāparinibbāna Sutta (DN 16; DN II.141). In the middle of a discussion between Ānanda and the Buddha about the Buddha's funeral arrangements, we suddenly find Ānanda asking "How should we behave towards women?" (Kathaṃ mayaṃ, bhante, mātugāme paṭipajjāma). The word for "women" here is a colloquialism made from mātā "mother" (in the genitive case mātu) and gāma "village". The Buddha tells Ānanda to ignore them (adassanaṃ; literally "don't look"). After a few more lines of this misogyny, we go back to discussing the Buddha's funeral arrangements. The change of subject is quite disorientating.
In this one case, I agree that there is an obvious reason to consider that the passage on how to behave towards women was inserted into the text at some point after the text was initially composed. It was done so badly that we cannot help but be struck by the incompetence of the editor. Still, it was done early enough to be considered canonical. On the other hand, this interpolation does not occur in any other surviving version of the Mahāparinibbāna Sutta. So we can conjecture that it was a Theravādin monk who did the interpolating. The negative attitude towards women is typical of Theravāda monasticism.
On the other hand the sharing of passages between texts is so common as to constitute a major feature of Pāli texts. Suttas have a kind of modular structure with a framework of common tropes and expressions (aka pericopes). Shared stock passages are the norm.
Compare the Pāli, Chinese, and Sanskrit texts of the Kaccānagotta Sutta, for example. They are all closely related, but in some cases whole phrases are present or missing in one. And this in a very short text. Quotations in Mahāyāna texts make it seem that the core of this text is a statement against applying the duality of existence/nonexistence (astitā/nāstitā) to the world (loka). The framing details of this important statement vary considerably. Notably the nidāna is different in all three, and the name of the main protagonist also takes three different, though closely related, forms, i.e. P. Kaccāna, Skt. Sandhākātyāyana, C. Shāntuó Jiāzhānyán 𨅖陀迦旃延. Moreover Sandhākātyāyana is almost certainly a mistake for Saddhā Kātyāyana, which means something like "Faithful youth of the Kātya clan."
Having briefly surveyed the main kinds of evidence that are used to try to establish a relative chronology within the Pāli suttas, I find that, except in the case of obvious interpolations, I can always think of a plausible alternative reading that does not result in any chronological speculation. That which is presented as evidence of chronology could just as well be evidence of regionalism or sectarianism.
The idea that we can discern any systematic chronology within the Pāli suttas seems quite fanciful. It certainly appeals to Buddhist theologians, but it's a house of cards. The foundations are essentially religious beliefs that are not open to discussion. The most striking of these is the religious conviction that the Buddha was a real person. This is axiomatic, for example, for bhikkhus Sujato and Brahmali, who have put a great deal of effort into arguments for the "authenticity" of the Pāli canon. But their definition of authenticity is itself incoherent. In their accounts of authenticity they assume that both the Buddha and Ānanda were real people who were just as described in the literature. There are no external criteria because there is no external evidence. Thus the whole rests on religious commitments rather than historical facts or events.
Moreover, the kind of relative chronology that is produced by these speculations offers little in the way of explanatory power. It is self-contained with very few exceptions, the most notable being that the cities in which the stories are largely set are real cities, although none of the characters in the stories can be considered historical characters.
As someone who likes to state clear conclusions, I sympathise with the historians who scrabble around trying to put things in chronological order. But the very aim of the project—to produce a chronology—determines what kind of outcome we get, i.e. a chronology. Other explanations for the same facts are never even considered as far as I can see. And despite all the efforts that go into this project we still cannot explain anything of importance using this artificially constructed chronology. This is partly because the relative chronology is not anchored to history at any point. Again the lack of external evidence of any kind is telling.
As far as I can see the only "real" thing in the Pāli suttas are the cities. The stories are set in cities that we know from archaeology. I've walked among the ruins of Sāvatthī for example. One can see it on Google Maps. The evidence that comes from analysing religious texts is something else again. And this may be part of the problem. Historians of Buddhism seem to forget that Buddhist texts are religious texts. And I'm not the only one pointing out the problems with this.
23 September 2022
Just How "Crazy" is the Heart Sutra?
I recently read Karl Brunnhölzl’s absurdist article “The Heart Sutra Will Change You Forever” in the Buddhist magazine Lion’s Roar (September 29, 2017). I composed this response and sent it off to the magazine asking that they consider printing it, but they did not respond at all (unlike Brunnhölzl, I have no cachet in the world of North American celebrity Buddhism).
Brunnhölzl adequately covers the basic ground as established by D. T. Suzuki and Edward Conze, and even cites Sangharakshita, the founder of the Order I was ordained in. However, based on 10 years of forensic research and fourteen published articles on the Heart Sutra, I thoroughly disagree with Brunnhölzl (and Sangharakshita) over what the Heart Sutra is about or how it was intended to work.
Brunnhölzl begins by stating:
He goes on in this vein for quite some time. Why do people say things like this about the Heart Sutra? We know, from Michel Foucault’s book Madness and Civilisation, that “crazy” is an ambivalent term in Europe and her colonies, especially since the 19th century Romantic/Idealist movement in Europe (and its parallel, Transcendentalism, in the USA). In the romanticized view, madmen often stumble on the truth precisely because they lack rational faculties. In reality, being crazy is an entirely unromantic catastrophe. Madness is a terrible affliction and people who are insane are inevitably the most unhappy people of all. We should really stop trying to make it sexy.
By asserting the "craziness" of the Heart Sutra, Brunnhölzl can be seen both to acknowledge that the Heart Sutra is confusing for most readers and to celebrate that ongoing confusion as a positive. Like many Buddhists who tell us that we cannot possibly understand the Heart Sutra, he then goes on to tell us (without a hint of irony) exactly how to understand the Heart Sutra. And he does so without apparent confusion on his part. Conze was a master of this old rhetorical trick of intimating that he was a Master of secret knowledge that he was willing to share with us.
When I read the text closely and across canonical languages, however, I arrive at a very different conclusion. For a start, there are several mistakes in the Sanskrit text, as edited by Conze. I have outlined these errors in my published articles and shown how to resolve them and have recently submitted some revised editions to a peer-reviewed journal (fingers crossed). There are also several ancient mistakes in the Chinese text. These were detailed by Matthew Orsborn aka Huifeng (2014). And when we deal with all these textual errors the whole business of paradox and contradiction simply disappears. There are no contradictions in the Heart Sutra. The Heart Sutra is not "crazy", not even a tiny bit. Then again, nor is it a text for beginners. It has a context and that context can take some years of study to understand. And I've had to do that without a teacher.
At any time since Conze published his edition in 1948, the mistakes he made could have been repaired. Illustrious scholars (including Brunnhölzl), deeply versed in Buddhist canonical languages and doctrines, have read the text and simply overlooked all of these problems. It appears that when one expects nonsense in a text, one is unable to distinguish between simple grammatical errors and genuine mysticism. Readers should keep in mind that all modern translations are based on faulty recensions of the text. If something doesn’t make sense, then it’s probably a mistake.
The “crazy” approach of asserting that all contradictions are true (A is not-A) was very much an aspect of D.T. Suzuki’s approach to Zen Buddhism and was taken up enthusiastically by Edward Conze. Conze had already arrived at similar conclusions while he was a grad student. His dissertation on Aristotle’s law of noncontradiction was published in 1932, but subsequently burned with other Marxist tracts by the Nazis (meaning that Conze's doctoral-level academic qualification was incomplete). Both men’s views on this were shaped by their reading of the Vajracchedikā Prajñāpāramitā (incongruously) known as the Diamond Sutra. In 2006, Paul Harrison showed that Suzuki and Conze had misread the Vajracchedikā. The apparent contradictions they saw there are based on a misunderstanding of the Sanskrit grammar (which does not occur in the Tibetan translations). Richard H. Jones has independently confirmed this in his work. Rather, the Vajracchedikā takes what is generally called a “nominalist” approach of asserting that abstractions are not entities, they are ideas about entities. Just because we have a name for an idea, does not make it a thing.
There are no contradictions in the Heart Sutra and no contradictions in the Diamond Sutra. There are only contradictions in the minds of Buddhists who cannot adequately parse a Sanskrit text.
Another problem highlighted by both Jones and Huifeng is the tendency to read Prajñāpāramitā through a Madhyamaka filter or, worse, as unadorned Madhyamaka. Although there is an old Buddhist tradition of doing so, it is wholly unjustified and distorts the message of the texts. We need to be clear that Prajñāpāramitā is neither Madhyamaka nor proto-Madhyamaka. I suspect, but cannot yet prove, that Madhyamaka had begun to influence Prajñāpāramitā by the time the Large Texts (in 18k, 25k, and 100k lines) began to be produced.
As Sue Hamilton has said of early Buddhism, it was not concerned with whether or not something exists, nor with what something is or is not. Rather, early Buddhists were concerned with experience and the cessation of experience. Commenting on his repaired text of the Heart Sutra, Huifeng (2014) argued that it suggested the necessity of an epistemic approach to the Heart Sutra. In my recent article (2022) on Prajñāpāramitā and cessation, I started to outline what such an epistemic approach would look like. Here I will précis that approach (at the risk of oversimplification).
Since Jan Nattier’s (1992) landmark article we have known that the Heart Sutra is a Chinese text. This result has been independently verified by Huifeng (2014) and by me (see esp Attwood 2021). Huifeng (2014) showed that where the Sanskrit Heart Sutra text reads aprāptitvād, the Chinese text has a jargon term—yǐwúsuǒdégù 以無所得故—coined by Kumārajīva specifically to translate anupalambhayogena “by means of practising nonapprehension”. This discovery has some major implications. For one thing, this fact can only be explained as a translation error going from Chinese to Sanskrit, not the other way around. The term anupalambhayogena is frequently used in the Large Prajñāpāramitā Text to qualify statements. So, for example, in Chapter 16 of Conze’s Large Text translation (p. 153 ff.) we see this term being used to qualify answers to the question “What is Mahāyāna?” It turns out to be the thirty-seven bodhipakṣa-dharma, but with this qualification, i.e. “by practising nonapprehension” (tac cānupalambhayogena). Note that Conze mistranslates this term as “without a basis” about half the time.
The essence of Prajñāpāramitā practice, in this view, is nonapprehension (anupalambha). Huifeng, Anālayo, and I all independently realised that this must relate to the Pāli Cūḷasuññata Sutta (MN 121), which describes a meditation practice in which one withdraws attention from sensory experience, causing it to stop arising and ultimately leaving the meditator in a state called suññatāvihāra "dwelling in absence [of sensory experience]". In parallel texts from the Chinese Āgama translations, this is referred to as kōng sānmèi 空三昧 (Skt śūnyatā-samādhi) (Choong 1999).
Let us look more closely at what one of the “crazier” passages says. This part of the text begins “In absence” (Ch. kōng zhōng 空中; Skt. śūnyatāyām). That is to say, in the samādhi of absence. In my view, this refers to a person who is meditating and has undergone the cessation of sensory experience (saṃjñā-vedayita-nirodha) and now dwells in the absence (śūnyatā) of sensory experience. In that state, no dharmas can arise because the conditions for their arising are absent. In standard dependent-arising doctrine, the absence of the condition prevents the consequent state from occurring. What follows is a list of lists, in which each member of the lists is negated. What no one realised until Huifeng (2014) was that there is a second qualification that comes immediately afterwards. As noted above, Huifeng shows that the lists are followed by this word, yǐwúsuǒdégù 以無所得故 and this means “by practising nonapprehension” (anupalambhayogena) rather than the usual “from a state of non-attainment" (aprāptitvāt). This tells us how the interlocutor arrived in the state in which one or more of the necessary conditions for the arising of sensory experience, usually attention, is absent.
Now, if I am in this state, then by definition there is no sensory experience. The existence of this state is confirmed by numerous accounts of meditation and now by neuroscientific studies. In this state, the skandhas, as the apparatus of sensory experience (c.f. Sue Hamilton 2000), have stopped functioning. The content of experience is minimal or absent. All of the categories of Buddhism are absent for anyone who is in that state. The text does not say that sensory experiences don’t exist, let alone that objects don’t exist. The whole rhetoric of existence and non-existence is irrelevant, as Elder Subhūti tells Elder Śāriputra in Chapter One of the Aṣṭasāhasrikā.
In other words, this is not, as popularly supposed, a statement that “form doesn’t exist” or a repudiation of the basic categories of Buddhist analysis. Instead, this is a straightforward statement about what it is like for sensory experience to stop, and why this state is the acme of Buddhism. It boils down to this: in the absence of sensory experience there is no sensory experience. This is so not crazy that it seems positively boring.
Another famously “crazy” passage comes a little earlier and equates kōng 空 (śūnyatā) with sè 色 (rūpa). We usually see this translated as “form is emptiness” and so on. Some translators and scholars persist in mistranslating rūpa as “matter” but this is an egregious mistake. In Sanskrit, rūpa means “outward appearance; visage”; it never connotes substance or matter. In Buddhist terms, rūpa is to the eye as sound is to the ear. We don’t hear the sound of a conch by cramming the shell into our ear canal. Sound waves emanate from the object and stimulate our hearing sense at a distance. Buddhists intuited that something similar happened with sight, but they didn’t understand the physics of light well enough to just say, “Light reflected from the object hits the eye”. Rather they intuited that something (which they referred to as rūpa “appearance”) was given off by an object and it was this that crossed the distance between object and subject and hit the eye, causing a visual experience. This understanding is reflected in the Chinese choice of sè 色 to translate rūpa. In Medieval Chinese, sè 色 meant “outward appearance” and in modern Chinese it means “colour”. Most scholars try to say that rūpa-skandha must be something other than rūpa. Hamilton (2000) opts to refer to it as "body". In my view this must be incorrect. It is rather that rūpa, the appearance of a visual percept, is here a metonym for all sensory appearances. (I've explained this recently in a blogpost: Notes on Translating the Skandhas, 16 September 2022.)
To understand this passage, we have to dig. We know, for example, that the Large Text is an expansion of the Small or 8000 Line Text. Incidentally, the small/large (xiǎo 小 / dà 大) distinction was invented by Kumārajīva in the fifth century. Although the Large Text contains a lot of new material, we can often identify the corresponding passages in the Small Text. When we do this for the phrase rūpaṃ śūnyatā, we don’t immediately find anything. This is because in the Small Text the phrase is rūpaṃ māyā, i.e. “appearance is an illusion”. This statement does not exist in a vacuum; it occurs throughout Buddhist literature, often in the form of a simile: rūpaṃ māyopamaṃ “appearance is like an illusion”. In his book, The Notion of Emptiness in Early Buddhism, Choong Mun-keat (1999) has noted many instances of the word śūnyatā being shoehorned into Buddhist texts which didn’t originally include it. This reflects, I think, the growing influence of Madhyamaka, and appears to have affected the Large Text much more than the Small.
The appearance of a sensory experience can be likened to an illusion, i.e. the illusion that is sensory experience. This is in no way paradoxical or contradictory. It certainly does not involve holding contradictory statements to be true. It is not at all crazy. Indeed, the idea that sensory experience is a kind of “illusion” is rather banal these days. We know that experience and reality are governed by different rules. Just because we represent the world to ourselves based on sensory experience does not mean that the objective world is unreal or nonexistent.
The Heart Sutra is demonstrably not “crazy”. The idea that it is or was “crazy” is rooted in misunderstanding the text and its practical context (especially the śūnyatā-samādhi). This is not to say that Buddhists are not fascinated by paradox, because evidently they are. Historically, however, contradiction played no role at all in Buddhist thought before Nāgārjuna. As Huifeng (2016) argues, the association of Prajñāpāramitā and Madhyamaka is not a given. The two earliest known Heart Sutra commentaries, from the late seventh century, both eschew the Madhyamaka connection in favour of a Yogācāra-inspired interpretation. To be fair, the Yogācāra reading is only marginally more coherent. It still stuffs the Heart Sutra in a box that it was not made to fit.
It is not until we begin to read Prajñāpāramitā as Prajñāpāramitā, i.e. until we pay attention to both text and context, that we begin to glimpse what the author(s) wanted us to see. Buddhists have long practised the techniques for bringing sensory experience to a halt. This is an aspect of early Buddhism, with hints that it might predate Buddhism (c.f. Anālayo 2022). And it means we need to step back from Madhyamaka metaphysics and consider Huifeng’s suggestion that we read the text more as epistemology than metaphysics. I find that Buddhism makes a great deal more sense when I take this approach. That is to say, I now read everything in Buddhist texts as being principally concerned with experience and the cessation of experience, and I don't have to deal with any contradictions or paradoxes. The craziness is adventitious, not inherent. That is to say, it is projected onto the text; it does not emerge from the contents of the text. Contradiction plays no role in Prajñāpāramitā despite the central role it has in the thought of D. T. Suzuki and Conze.
In this sense, Karl Brunnhölzl was right; studying the Heart Sutra did change my life. Not because "the Heart Sutra is crazy" but because I discovered that the Heart Sutra is not crazy. The Heart Sutra began to make a lot more sense when I dropped all the "crazy" nonsense and the unsupported metaphysical speculation and began to read it as being concerned with experience. Moreover, by applying this hermeneutic across the board, I was finally able to reconcile being a faith-type Buddhist with my love of science. Epistemic Buddhism does not encroach on the subject matter of science (i.e. ontology), leaving almost no room for conflict, whereas metaphysical Buddhism (which purports to inform us on the nature of reality) is almost a complete bust.
Further Reading
Anālayo. (2022). “Being Mindful of What is Absent.” Mindfulness 13: 1671-1678.
Attwood, J. (2021) “The Chinese Origins of the Heart Sutra Revisited: A Comparative Analysis of the Chinese and Sanskrit Texts.” Journal of the International Association of Buddhist Studies 44: 13–52.
——. (2022). “The Cessation of Sensory Experience and Prajñāpāramitā Philosophy.” International Journal of Buddhist Thought & Culture 32(1):111-148.
Brunnhölzl, Karl. (2017) “The Heart Sutra Will Change You Forever”. Lion’s Roar September 29, 2017. https://www.lionsroar.com/the-heart-sutra-will-change-you-forever/
Choong, Mun-keat. (1999). The Notion of Emptiness in Early Buddhism. 2nd ed. Motilal Banarsidass.
Hamilton, Sue. (2000). Early Buddhism: A New Approach. London: Routledge.
Harrison, Paul. (2006) “Vajracchedikā Prajñāpāramitā: A New English Translation of the Sanskrit Text Based on Two Manuscripts from Greater Gandhāra.” In Buddhist Manuscripts in the Schøyen Collection (Vol. III), 133-159. Hermes Publishing, Oslo.
Huifeng. (2014). “Apocryphal Treatment for Conze’s Heart Problems: Non-attainment, Apprehension, and Mental Hanging in the Prajñāpāramitā Hṛdaya.” Journal of the Oxford Centre for Buddhist Studies 6: 72-105.
——. (2016). Old School Emptiness: Hermeneutics, Criticism, and Tradition in the Narrative of Śūnyatā. Kaohsiung City, Taiwan. Fo Guang Shan. Institute of Humanistic Buddhism.
Jones, Richard H. (2012). The Heart of Buddhist Wisdom: Plain English Translations of the Heart Sutra, the Diamond-Cutter Sutra, and Other Perfection of Wisdom Texts. New York: Jackson Square Books.
Nattier, Jan. (1992). "The Heart Sūtra: a Chinese apocryphal text?" Journal of the International Association of Buddhist Studies 15(2): 153-223.
16 September 2022
Notes on Translating the Skandhas
I dislike it when translators adopt idiosyncratic translations, since they tend to dislocate us from the source text and the general body of translations. That said, I find some standard translations of Buddhist technical terms incomprehensible, even after almost thirty years of being Buddhist. To date I've only published a full-length article on one such term: vedanā. The vedanā article in Contemporary Buddhism (Attwood 2018) introduced the idea of "Humpty Dumpty linguistics" to Buddhist Studies (though with nods to Lewis Carroll, Ludwig Wittgenstein and others who first described these cases). Most linguists in our field are fully committed to the semantic paradigm in which meaning is inherent in morphemes.
If we take this approach with vedanā, however, we learn that the word comes from the causative root √ved "cause to know", from √vid "to know, understand, learn, be acquainted with, etc". The -ana suffix is used for action nouns, that is, nouns that name actions. So vedana means something like "that which causes knowledge", or as Monier-Williams defines it: "announcing, proclaiming, making known". As with a number of other Buddhist technical terms, the word is then used in the feminine gender vedanā, presumably to mark it as a technical term (as far as I know, this feature of the Buddhist lexicon has yet to be studied).
To be clear, a noun in Sanskrit cannot change its gender except when it is the second member of an adjectival compound, in which case it takes the case, gender, and number of the noun (or pronoun) it describes. Only adjectives routinely change their gender. Thus the existence of a form like vedanā is hard to explain using etymology and semantics. Wittgenstein pointed out that, for a large class of cases in which we employ the word "meaning", the meaning of a word is its use in the language (Philosophical Investigations §43).
This passage is often condensed into "meaning is use", which is not a bad rule of thumb despite the imprecision compared with what Wittgenstein actually said. Vedanā is a case in point. Buddhists use it to mean "the positive and negative hedonic responses to the appearance (rūpa) of sensory experiences". And this is completely unrelated to its etymology. Translators have long argued whether these hedonic responses constitute "feelings" or "sensations", but really they are neither; they are hedonic responses, i.e. the judgement that something experienced is pleasant or unpleasant. Neuroscience has a term for this, i.e. valence. "Valence" is itself an example of Humpty Dumpty linguistics. The etymological sense is "strength, strong, etc", from which we might take it to refer to "that which stands out". The use here also seems to draw on the chemistry sense of "capacity to form combinations": atoms that gain valence electrons are "electronegative" and those that lose them are "electropositive". And the amount of electro-positivity or -negativity is called the atom's "valency". Terms like "ferrous" and "ferric" for iron compounds reflect the different valencies of iron atoms. Incidentally, what do you call a load of Fe2+ ions in a circle? A ferrous wheel. (About all I remember from 2nd year inorganic chemistry.)
In what follows, I take a pragmatic approach, informed by my epistemic reading of Buddhist texts generally, and argue for a new approach to translating the skandhas. I pay attention to both pragmatic and prosodic factors rather than merely relying on semantics and the etymological fallacy.
Conze's translation of skandha as "heap" makes no sense, even as a metaphor, though it does seem to have some roots in later Buddhism. I have never found the standard translation, "aggregate", helpful either. An aggregate (singular) is a loose collection of similar parts with no structure. One skandha is not an aggregate, and all of them together simply cannot be "the aggregates" (plural). In other words, the usual translations are incoherent. Conze was a great one for saying that logic had no place in Prajñāpāramitā, an attitude he developed as a graduate student in Germany ca. 1928-1932, at least ten years before he learned Sanskrit. But he was wrong about logic generally, wrong about the role of logic in Buddhism, and wrong about the presence of logic-defying contradictions in Prajñāpāramitā texts. Rather, Mr Conze simply misunderstood the texts.
I've dug into both the etymology and use of skandha previously (e.g., Pañca-skandha: Etymology and Dynamics 2013) and concluded that the main reference is to "branching"; though this is debatable, it suits my purposes very well. In 2021, I did a series of essays on the khandhas in Pāli according to Sue Hamilton (2000) and Tilmann Vetter (2000), the two most extensive surveys of the idea of the skandhas in Pāli, summed up here: Skandha 2021; individual essays begin here: Modern Interpretations of the Khandhas: Intro and Rūpa (2020).
Sue Hamilton (2000) refers to the skandhas as the "apparatus of experience", which I think is a useful way of thinking about them. However, a detailed comparison of Hamilton (2000) and Vetter (2000) revealed one main weakness in both accounts: both place entirely too much emphasis on the Khajjanīya Sutta (SN 22.79), which turns out to be misleading.
rūpa is to the eye as sound is to the ear
Rūpa cannot mean "matter", for example, though it is frequently translated that way. Nor, contra both Hamilton and Vetter, can it mean "body". Rūpa is to the eye as sound is to the ear. That is to say, rūpa does not refer to substance; it refers to outward appearance, to how things appear. In the Khajjanīya Sutta, rūpa is glossed as related to ruppati "it destroys", but I discovered a passage in Sanskrit that suggests this is simply a mistake. The noun rūpa is completely unrelated to the verbal root √rup "destroy, harm". In a similar passage in Aṣṭa, the verb is rūpayati, which is a denominative verb (like the verb "to medal", i.e. "to receive a medal"). Rūpa means "appearance" and rūpayati means "to appear". And rūpa-skandha is a metonym for the general appearance of any sensory experience in the sensorium. That is to say, rūpa reflects coming into sensory contact with an object: light hitting the eye, sound waves hitting the ear, chemicals wafting into the nose, etc. This is what kicks off sensory experience according to early Buddhist texts.
Our immediate response is hedonic: we enjoy the appearance or we don't. Traditionally, as I said above, vedanā is the positive or negative hedonic response to sensory experience. Although it is less well known than some of the preferred translations, this concept actually corresponds very closely to what neuroscientists such as Lisa Feldman Barrett (2017) call "valence". Valence means precisely the positive and negative hedonic response to sensory experience.
Saṃjñā is typically translated as "perception", but we can already see that this cannot be right. We must already have perceived an object in order to experience it, and in vedanā we are already experiencing it. In ordinary Sanskrit, saṃjñā is used in the sense of "designation" or "name". One of the main senses of the word is “to acknowledge or recognise” something. What we recognise at this point is the experience itself. This is where we discern the sui generis characteristics of the experience and put a name to the experience. Sui generis is more or less identical with the Sanskrit term svabhāva, at least as used in early Buddhist and Abhidharma literature. It was Nāgārjuna who introduced the idea that svabhāva means autopoietic, i.e. self-creating, the (faintly ridiculous) idea that something can be a condition for its own existence. Nāgārjuna insists that for something to be real, it must have svabhāva qua autopoiesis. Since nothing has or can have svabhāva in this sense, nothing is real. And hence many Buddhists (rightly) saw Madhyamaka as nihilistic.
Saṃskāra is a borrowed word and we get a sense of how Buddhists used it by looking at the original context. In Brahmanical religions, saṃskāra denotes a rite of passage: birth, death, marriage, first-born son, etc. During such rites, the priests carry out specific ritual actions (karman). Thus a saṃskāra is "an occasion for performing karma". And this is why we say that saṃskāra is linked to volitions and explained by various types of cetanā "intention".
Finally, the one thing that vijñāna absolutely cannot mean is "consciousness", since there is no parallel concept in Buddhism: Buddhists resisted reifying sensory experience. Rather, I take vijñāna to suggest that we discern the sensory experience as related to an object. This is the final stage in the objectification of experience. Something appears in our sensorium, we have a hedonic response, we recognise the experience and put a name to it, our hedonic responses drive karmic actions (those that contribute to rebirth), and finally we identify the object itself.
The skandhas, then, refer to a process of objectification of experience. This is how Iron Age Buddhists thought that humans processed sensory experience. The word itself probably means something like "branch" and the pañca skandhāḥ are "the five branches of experience". Individually the branches refer to
• Rūpa = appearance
• Vedanā = valence
• Saṃjñā = recognition [of the experience]
• Saṃskāra = volition, i.e. an opportunity for karma
• Vijñāna = discrimination of the object one is perceiving.
And this account is far more coherent than any other I have come across. Moreover, properly contextualised by the absence of sensory experience, it helps to explain Buddhist approaches to meditation and insight: for example, why withdrawing attention from sensory experience leads to an altered mental state in which we do not objectify experience.
As scholars and Buddhists both, we have to keep in mind that this is an Iron Age account of human perception. We live more than twenty centuries after it was current and we know a great deal more about this process now.
That said, the framework retains some usefulness for Buddhists as a way of reflecting on the nature of sensory experience. By identifying these aspects in experience, we can note that all experience has the same nature: it is ephemeral; compared with the absence of experience, it is unsatisfactory; and within experience no entity (no thing) is to be found.
At the risk of flogging a dead horse, I have to insist that the absence of sense experience in samādhi is essential for contextualising Buddhist ideas. Moreover, it is the metaphysically reticent accounts of this that are crucial: samādhi tells us nothing about reality, except that it allows sapient beings to cut themselves off from sensory experience, ride out the effects of sensory deprivation, and arrive at a state of absence, cessation, extinction, etc. Without this perspective we are bound to come to the wrong conclusions about what Buddhists were getting at.
Of course a good deal of modern Buddhism completely lacks this perspective. Theravādins, for example, completely gave up on awakening, despite preserving instructions for how to attain it. They eventually abandoned meditation in favour of dry analysis of mental states. Traditions of awakening continued to exist in Mahāyāna Buddhist milieus however. Absence was still cultivated and still occurred in some meditators leading to traditions of "non-dual awareness".
Attwood, J. (2018). "Defining Vedanā: Through the Looking Glass." Contemporary Buddhism, 18(3): 31-46. https://doi.org/10.1080/14639947.2018.1450959
Wittgenstein, L. (1967). Philosophical Investigations (3rd ed.). Basil Blackwell. (Reprinted 1986.)
09 September 2022
On the Historicity of the Buddha in the Absence of Historical Evidence
I recently posted an appreciation of David Drewes' recent IABS conference presentation on the historicity of the Buddha to a Triratna Buddhist Order forum and got bushwhacked by a couple of traditionalists who both have PhDs. Let me tell you that PhD-level trolling is something else entirely and it did my head in for a while. Worse, Drewes (whom I admire greatly) was targeted by these doctors for ad hominem slurs based on strawman arguments, and I was tarred with the same brush. The insult du jour is "positivist", which is what they call anyone who asks for evidence for an assertion that we all know is not supported by any evidence. It was one of the most spectacular examples of patriarchal white male gatekeeping I've seen in a while.
One of the things I noted was that arguments for the historicity of the Buddha take much the same form as arguments for the existence of God. I could see that one of the good doctors was in favour of the ontological argument, for example. I thought it might be interesting to see how these arguments work. But let me begin by stating the problem.
The figure of the Buddha is ubiquitous in the Pāli suttas. We may glean all kinds of information about him from reading the Pāli suttas and their counterparts in Gāndhārī and translations into Sanskrit and Chinese. What we cannot do is definitively link the Buddha with any historical event or fact. There is no archaeology of the Buddha, for example. There are no contemporary coins or artworks that feature his image or symbol. There are no inscriptions or texts. There are no mentions of the Buddha or even early Buddhism in the texts of other (non-Buddhist) communities. Moreover, it turns out that no figure from the Pāli suttas, including the kings, can be linked to any historical evidence. The kings named in Pāli do not appear, for example, in the old Purāṇic lists of kings that do include Asoka. Worse, there are two different biographies of the Buddha in the Pāli suttas that disagree about substantive details.
And this is a problem for academic historians. That is, it is a problem for those whose job is to produce and teach objective accounts of history if there is no objective evidence to draw on. If there is no evidence from which to construct an objective narrative, academic historians are bound to say nothing or to mark anything they do say as speculation. Academic historians are not barred from speculation, but they cannot treat speculation as a form of knowledge. When we speculate that the Buddha was a real person this does not imply that we know this. Rather, if speculation is all we have, then we don't know. And if someone makes a claim to knowledge, this begs the question: How does that person know?
So at present, academic historians in Buddhist Studies have a problem in that they are tacitly taking speculation as knowledge. This is not necessarily a problem for anyone else. Religieux tell stories about the Buddha for reasons other than composing and teaching objective history. We tell stories to inspire, edify, affirm, and indoctrinate the audience with the views of our religion. The historicity of the Buddha is not, generally speaking, a problem for religious believers, because they simply believe without objective evidence, just as every other religious person on the planet believes what they believe.
The best we can do with an objective history of the beginnings of Buddhism is locate the stories in cities that we do know existed. I have wandered through the ruins of Sāvatthī and Rājagaha, for example. They were real cities. And archaeology tells us that these city states began to emerge around the seventh century BCE. We know what kind of pottery they made and we can contrast it with the contemporary pottery of the Brahmins living in Punjab. This tells us something about the cultures involved but not about any individual in those cultures.
That is to say, it is not that we lack any contemporary archaeological evidence. In fact, we have a good deal of evidence, it's just that it does not mention or even indirectly refer to the Buddha in any way. It is as though the cities are real but the people in the stories are not. It's easy to imagine why a storyteller might adopt this device of setting mythic stories in real places. In a feudal age where kings had absolute authority, it would not do to portray them in a poor light because they might just kill you (entirely legally). Moreover, by the time of Asoka, because of the rising power of monarchs, the Buddhist community had become dependent on royal patronage in addition to the support of wealthy merchants.
The first historical person in Indian history is Asoka. We can link Asoka to any number of historical facts and figures: inscriptions, art, architecture, mentions in foreign literature, and links with kings of Bactria whose dates are well attested. Either of Charles Allen's (popular history) books The Buddha and the Sahibs or Ashoka contains a good outline of this evidence and how it was discovered (the two books overlap substantially in content).
By contrast the stories about the Buddha all have a strongly religious character. They almost always include some supernatural element, a feature that intensifies in texts from later periods. A figure whose main features include supernatural powers is difficult to locate in an objective historical narrative, since objectively there are no supernatural powers. Objectivity is not neutral. No objective history includes accounts of supernatural powers because such powers are a product of the religious imagination.
Though most people believe that the Buddha existed, Drewes argues that academic historians are bound to use a higher evidential bar, and, all things considered, the Buddha does not meet that bar. As a result, he argues that academic historians should not continue to speak of the Buddha as an historical person. He is a figure of myth and legend.
Drewes is specific about who his target audience is: it is academic historians. It is not Buddhists per se, except where they are also academic historians, which is quite often in Buddhist Studies. So having established this, let's look at how Buddhists argue for the historicity of the Buddha, using a framework I've cribbed from a popular philosophy book (i.e. 50 Philosophy Ideas You Really Need to Know by Ben Dupré).
The Teleological Argument (or Argument from Design)
In this approach, the theologian argues that the "beauty, order, complexity, and apparent purpose" observed in the world cannot have come about by chance. Some mind or intelligent force had to shape things to make them so perfect. And in our case that intelligent force was the Buddha.
In 1802, the theologian William Paley used the phrase "the divine watchmaker" to reflect a mechanistic view of this argument. It was this that gave Richard Dawkins the idea of referring to evolution by natural selection as "the blind watchmaker". But any view of evolution with a "watchmaker" in it is teleological. There is no watchmaker. The "watch" makes and remakes itself in this case, by evolving according to patterns that seem to be properties of the universe.
Applied to the Pāli suttas we see this argument at various levels of sophistication. The most brute form of this is "The Pāli suttas exist, therefore the Buddha exists". A more sophisticated version says that the stories are too complex, too connected by an "underlying unity", too realistic, for the Buddha not to have been an historical person.
As one of my doctorate-holding detractors said, "Why go to all that trouble if the Buddha wasn't real?" This simply begs the question, "Why do religions create and transmit religious stories at all?" This is not a hard question to answer.
We use stories, images, and symbols because people relate more strongly to stories with people in them. They also relate strongly to what Justin L. Barrett (2004) calls minimally counterintuitive elements, like animals playing the parts of people or supernatural powers. Indeed, research cited by Barrett seems to show that embedding one's message in a story with minimally counterintuitive elements makes it more memorable. So a Buddha with supernatural powers occupies our minds more strongly than a Buddha without them. Just as a talking wolf is what makes the story of Little Red Riding Hood so memorable and so useful as a warning against naïveté.
We tell stories, including religious stories, to communicate values, attitudes, and ideas. And we use storytelling devices to reinforce the message. We think of the narrative arc or structure, characterisation, world-building, and so on. The best stories combine the best of each element. There is no doubt, for example, that the Buddha we meet in Pāli is a compelling character, even if the prose is generally turgid and repetitive. The settings of the stories do a good job of world building. And so on.
The problem is that no evidence exists outside of the stories that supports the idea of an historical Buddha. Which may be fine for believers, but we are considering the position of the academic historian.
We might ask, for example, whether we can imagine this body of literature emerging and taking the form that it does in the absence of a human founder of Buddhism. And I have no problem at all imagining this. However, I cannot conclude from this that I know that the Buddha did not exist. On the contrary, I am admitting my ignorance: I don't know if the Buddha was an actual person or not. And this is my official position on the matter unless and until more evidence emerges.
Still, if I don't know, then unless you have better evidence than I have access to, you don't know either. And if you have new evidence then, as an academic historian, you are bound to publish it in order to be taken seriously. As of today (9 Sept 2022) no such evidence has been published. Academic historians do not know if the Buddha was a real person. No one knows.
A body of literature was surely shaped by some human mind or minds. But it need not have been the Buddha. Humans have been telling mythic stories for as long as we have had language, which is likely in the order of 200,000 years (on the antiquity of human mythology see Witzel 2012). But the early Buddhist texts are very pluralistic and are clearly shaped by more than one mind. Below I will discuss the hidden (in plain sight) pluralism of dependent arising. Now let us look at some of the main arguments that theologians have tried for the existence of God and how Buddhists use similar arguments.
The Cosmological Argument
The cosmological argument in its simplest form is that "Nothing can come from nothing". Everything is caused by something other than itself (autopoiesis is just as forbidden for European intellectuals as it was for Nāgārjuna).
This is a form of argument that we see a lot in Buddhism because of our emphasis on phenomena having necessary conditions. The logic follows from the Buddhist axiom that "things arise in dependence on conditions". The trick is what we mean by "things". There is no doubt that the majority of contemporary Buddhists mean "everything" by this, indeed "every possible thing". For modern Buddhists, dependent arising is their theory of everything. As Eviatar Shulman has said, there is no reason to believe that early Buddhists intended this explanation to go further than mental activity or that they saw it as a theory of everything. Many historians of Buddhist ideas now believe that the received interpretation came along substantially later. What we see in the early texts is not this metaphysical speculation, but a rather smaller epistemic claim: all mental phenomena arise in dependence on conditions. And the main condition is attention. Withdraw attention and sensory experience ceases. And then life starts to get interesting.
One form of this argument—everything happens for a reason—is known as the teleological fallacy.
Reasons are ideas or propositions evinced by humans to explain their actions in terms of internal states such as motivations, desires, etc., or external circumstances such as peer pressure, coercion, etc. As Mercier and Sperber (2017) have argued, reasons qua explanations of actions are entirely post hoc. Careful study of reasons and reasoning shows that our decisions are mainly driven by unconscious inferences, and then consciously justified only in retrospect. And reasons are subject to all the usual biases and fallacies. For example, we tend to settle on the first plausible reason that comes into our mind (anchoring bias). We tend to seek confirmation of our stated reason, rejecting any counterfactual information (confirmation bias). And so on.
Outside of human and animal behaviour it is not even true to say that everything that happens can be traced to a cause. Causation is tricky, especially after David Hume (1711–1776), who pointed out that we never observe causation per se, we only ever observe sequences of events. "Causation" seems to be a structure that we impose on experience to make sense of it rather than a feature of reality. Immanuel Kant (1724–1804) developed this idea by showing that metaphysics generally are imposed on experience by us, rather than emerging from within experience.
Against this is our everyday experience of causing things to happen by desiring them to happen. As John Searle (1932– ) is fond of saying, "I think about my arm going up, and the damn thing goes up" (always accompanied by the appropriate action). That is the archetype of causation for human beings. Although philosophers often prefer to discuss causation in the abstract, I think this is both a red herring and an intellectual cul de sac. That said, our experience of causing things doesn't generalise to a theory of causation. Physical processes don't involve an agent having a desire. A rock rolling down a hill follows the applicable physical laws, but it has no agency. It cannot choose not to roll downhill, for example. A rock rolling down a hill is simply following inherent patterns of the evolution of matter and energy over time. There is, at the very least, an epistemic distinction between agent-driven change and non-agentive change. They follow different patterns that we are pretty good at distinguishing.
Moreover, where we have been able to identify non-agentive patterns of change, which were known as Laws by nineteenth century natural philosophers, they don't include the concept of causation. When we examine classical laws of motion or laws of thermodynamics, for example, there is no term that indicates "causation". When we see a classical law like F=ma we assume or intuit that the force causes the acceleration, but this is not the case. Rather, it tells us how to calculate the magnitude and direction of the force having observed an accelerating mass. It does not tell us anything about causation. Forces do affect how matter behaves, but the idea of causation is just that, an idea. An idea we project onto the situation, when in fact it only exists in our minds.
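To see the point concretely, here is a minimal sketch in Python (the mass and acceleration are invented for illustration) of what a law like F=ma actually licenses: a calculation of the magnitude and direction of the net force from observed quantities, with no term anywhere that encodes "causation".

```python
# F = ma as a calculation rule, not a causal claim.
# Given an observed mass and an observed acceleration vector,
# the law lets us compute the magnitude and direction of the net force.
# Nothing in the calculation says the force "causes" the acceleration;
# causation is an interpretation we bring to the result.

mass = 2.0                      # kg (invented observation)
acceleration = (3.0, 4.0, 0.0)  # m/s² (invented observation)

# Component-wise F = ma
force = tuple(mass * a for a in acceleration)

magnitude = sum(f ** 2 for f in force) ** 0.5
direction = tuple(f / magnitude for f in force)

print("Net force (N):", force)         # (6.0, 8.0, 0.0)
print("Magnitude (N):", magnitude)     # 10.0
print("Direction (unit):", direction)  # (0.6, 0.8, 0.0)
```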
The cosmological argument for the Buddha goes like this. The Pāli suttas exist, therefore they must have had a cause. For Buddhists that cause is assumed to be the Buddha: since the Pāli suttas exist, the Buddha must have caused them to exist. According to this view, if Buddhism is not the product of the Buddha, then it is incoherent. One has to be careful here, because there is much about the Pāli literature that is incoherent. One will not find a coherent theory of karma and rebirth, for example. One will find numerous contradictions in the stories. And so on.
There is a further fallacy about the Pāli suttas that contributes to this and other arguments for the Buddha, which is often phrased in terms of "underlying unity". In this view, observers claim to see such a uniformity of expression and thought that the suttas must have been conceived by a single mind. That mind was the Buddha's mind, even if the Buddha is not accurately portrayed in the stories. Without the idea of the Buddha, many people apparently struggle to make sense of Buddhism (the many beloved characters of fictional literature notwithstanding).
The absence of evidence often forces those who try the cosmological argument to retreat into a god-of-the-gaps approach. Since the Buddha cannot be found in the evidence, he must exist in the absence of evidence. This stymies any discussion, since pointing to the absence of evidence cannot refute a god-of-the-gaps argument: the argument relies on the absence of evidence. And it becomes rather like trying to have a discussion about anything with a Mādhyamika: pointless.
Aesthetic Arguments
Some Buddhists argue that they don't give a focaccia about history, it just feels right to believe in the Buddha. Or it just "makes sense", i.e. they find it intuitive. This is often followed by a denunciation of reason, reasoning, intellect, or anything other than aesthetic judgement when considering the historicity of the Buddha. The obvious intellectual influence here is Romanticism, i.e. sensibility over sense. Although the English Romantic movement itself was short-lived, its impact on English intellectuals is still profound. In Triratna, for example, Romanticism is sometimes equated with Buddhism without qualification. For those who take this approach, the poems of the English Romantic poets appear to have the same status as Pāli suttas. I'm definitely not on board with this. Romanticism is an ideology and the English Romantic poets were a bunch of feckless aristocrats out of their heads on drugs half the time.
Since the evidence for the Buddha is inconclusive at best, some Buddhists adopt a version of Pascal's wager: all things considered it is best to act as if the Buddha was a real person, because if we are right then we are right and it's all good, but if we are wrong there is still the consolation of acting correctly according to Buddhist norms (which Buddhists hold to be the highest form of morality). The Buddhist argues that it is better to be a Buddhist than not to be. Funny that.
Drewes, however, was talking about academic historians doing academic historiography. As historians we are bound to take the evidence seriously. In the absence of evidence we may speculate, but this has to be sharply distinguished from a claim to knowledge. If we are speculating, then we don't know. As an academic historian, one has to be able to say "We don't know". And in the case of the Buddha, we really just do not know.
Positivism is a particularly rigid idea about what constitutes evidence, usually in relation to the empirical sciences. Positivists are rigidly empirical about evidence: if you can't measure it, it doesn't exist.
The false claim put forward by the two doctors was that Drewes and I were excluding valid evidence on ideological (i.e. positivist) grounds. The evidence we are excluding from objective history is the Pāli suttas themselves. And we are excluding them in particular ways. I have no doubt, for example, that the Pāli suttas reflect the culture in which they were written.
This is completely uncontroversial in the case of the Pāli commentaries. For example, the commentaries construct elaborate family trees for the Buddha and other characters linked to him. But these family trees exhibit a preference for marriage patterns that only exist (in India) amongst Dravidians and their neighbours in Sri Lanka. We see, for example, an emphasis on cross-cousin marriage. A cross-cousin is a first cousin from your parent's sibling of the other gender. So, a Sri Lankan boy might be married to his father's sister's daughter, or to his mother's brother's daughter. Either way, first cousin marriage was considered incest in North India and it is presently illegal to marry a first cousin in India. By contrast in Sri Lanka first cousin marriages are normal, a custom absorbed from Dravidian India, and presently legal. So when the commentaries composed in Sri Lanka make cross-cousin marriage a feature of the Buddha's family, we know that this reflects Sri Lankan culture not the Buddha's culture.
Those who assert that the Buddha is an historical person ought to be prepared to say how they know. But when you ask them this open, perfectly valid, and not at all positivist question, those who assert the existence of the Buddha respond with one or other of the theological arguments outlined above. But none of those arguments holds water for academic historians.
It should be noted that nowhere in mainstream academia, except perhaps in Christian Studies, does any academic accept these arguments applied to the existence of God. And no Buddhist has ever defended these arguments for God, even when they use exactly the same form of argument for the existence of the Buddha. There are differences, of course, since the Buddha can't be held responsible for the problem of evil, for example, despite being routinely referred to as "omniscient" (sarvajñā "all knowing") in later texts. Nor is the Buddha implicated in the creation of the universe, though Buddhists still insist on a cyclic universe in blatant contradiction of the facts. We live in a universe that, as far as we know, began once, and only once, and will exist forever. But still the forms of argument are recognisable.
The supposedly "authentic" texts routinely describe the Buddha in supernatural terms. He reads minds, he converses with gods, he goes to and from the god-realms, he flies, he does miracles, his tongue can cover his face, and so on. These magical elements of his character are only magnified as time goes on. The Buddha of the later hagiographies is far more magical and supernatural than in earlier stories. The plethora of Buddhas that replace Gautama, beginning with Akṣobhya and Amitābha, are almost completely magical and hardly human at all. They exist in other universes and cross the barriers to rescue us (from ourselves) if we only have faith and chant their name. I still have no idea where Bhaiṣajya Buddha (the "medicine Buddha") comes from or how he works. We have moved well away from Buddhism qua "philosophy", "moral system", or any other bowdlerised European way of talking about it.
As part of their denunciation of Drewes and me, one of the PhDs accused us of being positivists, and I want to circle back to this assertion.
What Kind of Historian am I?
I find it hard to credit that anyone would call me a positivist, though this is not the first time. I mean, just look at how I handle evidence in my history articles. We have to be quite flexible in many cases. I know, for example, that the Fangshan stele was commissioned on 13 March 661 because an inscribed colophon says so. The positivist might ask what evidence we have to support this date. I mean, the scribe could have been lying, right? We don't know the date of the Fangshan stele except by assuming that the scribe wasn't lying. The positivist would not accept this, but, with some caution, I do. Because there are times when it is reasonable to trust the evidence, even as an academic historian.
My approach is roughly speaking Bayesian. I look at all the possibilities based on what I currently know and give each a probability. All possibilities have a non-zero probability. Then I see what more I can learn and use what I've learned to reassess the probabilities. I don't do this formally. I don't, for example, assign numerical values for the probabilities. I weigh them up quite intuitively, though I'm usually more conscious of deciding which factors I consider salient to the question. I try to adopt the most likely position, but with a mind open to and actively seeking further evidence.
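For what it's worth, the shape of this informal reasoning can be made explicit. The following is a minimal sketch of a Bayesian update in Python; the scenarios and all the numbers are entirely hypothetical, since, as I said, I never actually assign numerical values.

```python
# A minimal sketch of Bayesian updating over competing historical scenarios.
# The scenarios and all numbers are hypothetical illustrations only.

# Prior beliefs over mutually exclusive scenarios (summing to 1).
priors = {
    "scenario A": 0.5,  # e.g. the received account is broadly right
    "scenario B": 0.3,  # e.g. the evidence points to a later date
    "scenario C": 0.2,  # e.g. some third possibility
}

# How likely a new piece of evidence would be under each scenario.
likelihoods = {
    "scenario A": 0.1,
    "scenario B": 0.7,
    "scenario C": 0.2,
}

# Bayes' rule: posterior is proportional to prior times likelihood,
# then normalised so the posteriors sum to 1.
unnormalised = {s: priors[s] * likelihoods[s] for s in priors}
total = sum(unnormalised.values())
posteriors = {s: p / total for s, p in unnormalised.items()}

for scenario, p in posteriors.items():
    print(f"{scenario}: {p:.2f}")

# Evidence that fits scenario B best shifts belief towards B,
# but every scenario retains a non-zero probability.
```

The key feature, and the reason the analogy fits, is that no possibility is ever assigned exactly zero: new evidence can always revive a currently unlikely scenario.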
If we are dating the Heart Sutra then we know, for example, that the commonly cited date of 609 CE for the copying of the Hōryūji manuscript is objectively false. This date first appears in a Japanese book published in the 1800s. And it is widely acknowledged amongst academic historians that the book lacks credibility. Moreover, it contradicts more weighty evidence. The script and writing appear to be consistent with the 9th or 10th centuries.
Also, I have suggested that the Heart Sutra was composed after 654 CE, based on the assumption that the Xīn jīng copied the dhāraṇī from the Tuóluóní jí jīng 《陀羅尼集經》 (T 901). This text was translated by Atikūṭa ca. 654 CE, having arrived in China only ca. 651 CE. Since the Xīn jīng has apparently copied the dhāraṇī from the Chinese rather than the Sanskrit, we may conjecture that it was composed after 654 CE. I don't know this. But I think it is the most likely scenario given the evidence. It is of a piece with better established facts that I have discussed in my publications. No positivist would give this the time of day.
Based on the present state of our knowledge, the Heart Sutra simply could not have existed in 609 CE and the Hōryūji manuscript itself is highly unlikely to be from that date.
Now this evidence is vague and my conclusions provisional. I'm proposing what seems like the most likely scenario, given the evidence. Where the evidence is vague or ambiguous discussion may ensue about which is the better interpretation of it. And in these circumstances we may expect historians to wade in and express opinions, but not to express their opinions as a kind of knowledge. The only escape from (typically ego-driven) opposition of opinion is to find and write about new evidence. Which is what I have been doing to the Heart Sutra for 10 years now.
There is little point arguing about the existence of the Buddha until new evidence arrives. We've seen all the theological arguments for interpreting the texts as being the product of one person, but most academic historians find this far-fetched at best.
And so on. No one who took the time to read my historical scholarship could rightly accuse me of being a positivist. I'm far more flexible than that. I do try to be clear about how confident I am about various claims to knowledge, and in each case I have published the extensive arguments for what I take to be the case. Unlike some of my interlocutors, I don't make unsubstantiated claims in my published work and I do raise many still unanswered questions. I may indulge in more speculation informally, but the argument here is about academic historiography and, given that, I'd prefer to be judged on my publications in academic journals than on work completed under less rigorous conditions.
If you are going to accuse me of intellectual bad faith then you had better have a bit more on your side than not liking me or not liking my conclusions. You better not be promoting religious claptrap on the side.
Objectivity is Not Neutral.
Modern academic historians, even the non-positivists, strive towards more objective accounts of history. At the same time we still argue about what "objective" means. I take it to mean that which is the same for all observers. Even then, seeing the objective requires clearing away the subjective, which we do by comparing notes (which is why scholarship is necessarily a dialogue).
One of the reasons history is so often about famous people and battles, about dates and numbers is that the objectivity of these can be confirmed with reference to multiple sources. Ancient history presents increasing problems as we go back in time because evidence simply no longer exists. Ancient written records, especially religious tracts are, generally speaking, highly unreliable historical sources, as any number of academic historians have said and continue to say.
These days the only people producing tracts with titles like "The authenticity of the Pāli Suttas" are Theravādin bhikkhus and their academic allies. I once upset Sujato by referring to him, in passing, as a Theravāda apologist, though this was some years before he and Brahmali published the apologetic tract just mentioned. Bhikkhus submit to the Vinaya (an Iron Age code of monastic etiquette) and notably take a life-long vow to refrain from all sexual activity. No one who is attempting to live such a vow can be objective about the circumstances in which the vow makes sense. Because, for most of us, monastic chastity makes no sense and has been demonstrably harmful. Yes? Having strong, lifelong commitments, that in turn shape one's role and status in one's community and beyond, makes it hard to be objective. Because if being objective disproves some basis on which your commitment is based, then you are in real danger of losing that role and that status.
An historical Buddha seems intuitive to a Buddhist who has spent decades talking about the Buddha as a special kind of person (a magical person, though perhaps not quite a god). Of course, the familiar seems intuitive to the person immersed in it. What always seems counterintuitive is the new and novel. The sensibilities of Buddhists, therefore, have to be eliminated from consideration of academic history. We fully expect Buddhists to believe in the Buddha, but that belief is not evidence for the Buddha any more than Christian faith is evidence for the existence of God.
The Buddhist anxiety about issues of legitimacy and authenticity seems quite universal. We see it in the earliest texts in which Buddhism is apparently a heterodox view that has to be carefully distinguished from other contemporary forms of religious asceticism. Buddhists were also at pains to insist that Buddhist methods were distinct from those of other religions, though there is some evidence to suggest that Buddhists inherited existing meditation techniques and modified them precisely to make such a distinction. Hence the complex position that we see in Pāli suttas on the respective jhāna and āyatana meditations and the weird combination of them both in some places.
The much vaunted "underlying unity" is clearly a figment of the imagination. And if you want a demonstration then I suggest looking into the various formulations of the nidānas, which I once mapped out in a diagram while studying them.
I count seven distinct formulations of the nidānas, sometimes using completely different terminology. The underlying unity here seems to be "one thing leads to another", and I doubt even the most ardent Buddhist theologian would claim that this idea is profound or only found in Buddhism. Back in 2011, I wrote a blog post on many historical examples of the idea that everything changes: a completely ubiquitous idea across cultures that owes nothing to Buddhism. It's just that Buddhists also noticed this thing that everyone notices eventually (getting older makes this a lot clearer).
And this is the norm. What we see in the Pāli suttas is an unevenly imposed uniformity that barely hides a pluralistic past, in which Buddhists believed a much broader range of things than can be accounted for in traditionalist approaches, including the modern Theravāda.
As I was thinking about this and scanning the historical literature I came across some academic accounts of why arguments are inherently adversarial. The problem, according to Howes and Hundleby (2021), is that beliefs are not something we choose. Beliefs are involuntary. And this means that whenever a believer enters into an argument they risk a belief-changing event, and this makes for a certain kind of vulnerability.
This is interesting, because if true, it explains why Buddhists tend to be so vicious in debate (and my goodness Buddhists can be extremely vicious if their beliefs are challenged). Just being in a debate, they risk losing their faith and they fight as if that would be the end of the world. For example, a Theravāda bhikkhu with both institutional and ecclesiastical titles and privileges could lose both if they stopped believing. Even for a rank and file Buddhist, loss of faith might result in social isolation and loss of status. For a social primate these are very high stakes indeed.
By inadvertently starting an argument about the historicity of the Buddha with true believers (PhDs notwithstanding) I accidentally triggered that sense of vulnerability that all religieux have.
Drewes, David. (2017). "The Idea of the Historical Buddha." JIABS 40: 1-25. DOI: 10.2143/JIABS.40.0.3269003
——. (2022). "The Buddha and the Buddhism That Never Was". XIXth Congress of IABS, Seoul, August 2022.
Howes, M., and Hundleby, C. (2021). "Adversarial Argument, Belief Change, and Vulnerability." Topoi 40, 859–872. https://doi.org/10.1007/s11245-021-09769-8
Mercier, Hugo and Sperber, Dan. (2017). The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.
Witzel, E. J. Michael. (2012). The Origins of the World's Mythologies. Oxford University Press.
02 September 2022
Some Notes on Cessation and Prajñāpāramitā.
My thirteenth article on the Heart Sutra has been published.
(2022) "The Cessation of Sensory Experience and Prajñāpāramitā Philosophy" International Journal of Buddhist Thought and Culture 32(1):111-148. IJBTC Website. [free download]. Academia.edu
In this article I directly address the philosophy of Prajñāpāramitā as it occurs in Prajñāpāramitā texts for the first time (for me, and probably for you too). I'm not the first to attempt to explain Prajñāpāramitā by any means. That said, these days I'm operating in an entirely different paradigm to scholars like Edward Conze or Linnart Mäll, or religious leaders like the Dalai Lama and Thich Nhat Hanh. I never did fully accept the metaphysical speculations that surround this genre, which always sounded screwy to me, but now I know there is a better alternative. As usual, I rely a great deal on pioneering work by Sue Hamilton, Jan Nattier, and Matthew Orsborn (aka Huifeng).
Hamilton (2000) explores an epistemic reading of early Buddhism, notably the khandhas. She shows that it is far more coherent to think of the Buddha as being concerned with experience rather than with reality. Indeed there is no Pāli word that corresponds with our concept "reality" and few if any texts that discuss reality or the nature of reality. What the Pāli suttas mainly discuss, amidst all the myth and miracles, is sensory experience and in particular the cessation of experience during meditation. That sensory experience can cease without loss of consciousness is the key discovery that sets Indian religion and philosophy apart. A great deal of Indian religion seems to me to be bound up with the implications of this discovery.
Nattier (1992) showed that the text was composed in Chinese, and both Huifeng and I have independently confirmed this by showing that the patterns she observed in the core passage can be seen throughout the Heart Sutra. Huifeng (2014) was the first to notice certain mistakes in the Sanskrit text that have contributed to our misreading of the Chinese Xīn jīng «心經». He noted at the time that the corrected text points to the need for an epistemic reading of the Heart Sutra. In 2015, I published the first of a series of articles pointing out long-standing but unrelated mistakes in Conze's critical edition of the Prajñāpāramitāhṛdaya. Between us, we ought to have created enough doubt to suggest the need for a reappraisal of Prajñāpāramitā philosophy.
This essay is a kind of supplement to the published article, with more background information. I begin with some history.
Some History
At around the time that city states were emerging on the central Gaṅgā Valley floodplains, new religions or Dharmas were emerging in the region: theistic Brahmanism, Sāṃkhya, Jainism, Ajivaka-ism, and of course Buddhism. And these appear against a backdrop of local animistic religions, from which Buddhism got yakkhas, tree-spirits, and other non-human (amanussa) beings. Archaeologists tell us the new cities begin to appear in the sixth century BCE. The city states are mainly kingdoms and several of them are characterised by imperialism and military conquest. The Moriya dynasty of Rājagaha and Paṭaliputta went on to spawn a subcontinent-spanning empire in the third century BCE. Of the ancient cities from that time, only Varanasi (Pāli: Kāsī) has been continuously occupied.
Incidentally, although it is de rigueur to give historic names in Sanskrit, the practice is incoherent. Almost no one outside of the Punjab spoke Sanskrit at that time. The other thing that emerged at this time were the Middle Indic (or Prakrit) languages; the everyday speech of people in those regions was not the Old Indic saṃskṛta bhāṣā recorded by Pāṇini. The new vernacular languages probably don't derive directly from the language of the Brahmins either, since that was only one form of Old Indic and was preserved only within a hermetic community of Brahmins. In particular, there can be no suggestion that Lāja Piyadasi, aka King Asoka, ever spoke or used Sanskrit in any way. It is anachronistic to refer to him in Sanskrit as Aśoka (or Ashoka).
I have speculated (Attwood 2012), based on some informal comments by Michael Witzel, that one catalyst for the social transformation that resulted in cities and Prakrits emerging was the arrival of small groups of people (including the Vajji, Mallas, Kāmālas, and Sakkas) who initially migrated into India from Persia (bringing with them some Persian ideas and customs, a few of which were incorporated into Buddhism). After a dry spell, they moved into the interior, avoiding the Brahmin territories to the north, and settled on the margins of the emerging city states in the Gaṇgā Valley, where they took up the patterns of life that we see depicted in Pāli stories.
We have little reliable evidence for this period but it seems likely, from texts like the Bṛhadāraṇyaka Upaniṣad and the Ariyapariyesanā Sutta, that meditation in the sense of withdrawing attention from sensory experience was discovered by a group of migrant Brahmins living in the Kosala region who were experimenting with visualising rituals rather than acting them out (sometimes called the "interiorisation of ritual").
However it happened, the early hagiographies of the Buddha show him learning how to meditate from non-Buddhist teachers whose attainment of the āyatana states is consistent with attention-withdrawal being their main technique, and who are distinguished only by how far they got with it. Buddhists, especially the Theravāda sect, were at pains to show the Buddha breaking away from his early teachers and finding his own technique, which we now refer to as jhāna (Skt dhyāna). But there are also suttas in Pāli, notably the Cūḷasuññata Sutta (MN 121), that show Buddhists still doing the older style of meditation in which one withdraws attention and reflects on the absence of sensory experience that results from this. The persistence of this thread in the Buddhist canon is all the more interesting when we consider that it went against the flow of Buddhist orthodoxy, which at that time was rapidly moving towards a focus on Vinaya and Abhidharma. In this sense we can think of Prajñāpāramitā as an innovative literary form emerging from a conservative community of meditators.
Learning to withdraw attention from sensory experience can be fascinating. Not least because it is functionally identical to sensory deprivation and has the same side effects, i.e. visual, aural, and somatic hallucinations. Experienced meditation teachers tell us that the weird sensations, lights, and even sounds that we encounter in our minds when we first learn to meditate are not significant. However, as sensory deprivation intensifies we may have more vivid hallucinations with a hyperreal quality that very often are judged to be significant. We tend to call these types of hallucinations "visions" and attribute a heightened meaning to them. Many meditators feel that their "visions" have revealed an ineffable truth about the universe to them. As yet there seem to be no scientific studies of the role that sensory deprivation and consequent hallucinations play in Buddhist meditation (I've dropped hints with some of the leading neuroscientists via Twitter: look up people like Karin Matko, Heleen Slagter, Thomas Metzinger, Ruben Laukkonen, etc).
Ancient texts like the Cūḷasuññata Sutta tell us that beyond all this foam of ephemeral sensory experience there is a state (variously deeper or higher, depending on preferred cognitive metaphors) in which all sensory experience has ceased (nirodha), is extinguished (nirvāṇa), or is absent (śūnya). I speculate that after emerging amongst Brahmins in the Kosala region, these techniques were taken up by all the religions of Second Urbanisation India. People of those various religions were all practising attention withdrawal but (then as now) interpreting the results differently according to their own doctrines.
The Buddhist explanation of the absence and presence of sensory experience became the dependent arising doctrine, which some Buddhists sought to make a theory of everything. In this view, sensory experience arises dependent on the presence of conditions (imasmiṃ sati idaṃ hoti), one of the main conditions being "attention" (manasikāra). In manasikāra, the kāra refers to "a maker" and manasi is manas "mind" in the locative case. In English we naturally want to read this as "in the mind", but I'm a little doubtful about whether ancient Buddhists had the cognitive metaphor MIND IS A CONTAINER (see The 'Mind as Container' Metaphor 27 Jul 2012). In translation the locative typically becomes the prepositions "in, on, at, etc.", but we can also read it as "with reference to". So manasikāra would be "a maker with respect to the mind". It is apparent that in some contexts words like manas, citta, and vijñāna were seen as interchangeable, while in other contexts they have distinct technical meanings. We typically take this context to be temporal, with technical terms emerging relatively "later" than undifferentiated forms. But this is a presupposition and, as far as I know, there is no evidence external to the texts that could corroborate it. Such differences need not be temporal at all. They might be sectarian, for example, or geographical. We really don't know.
A Digression on Causality and Proximity
I'm sometimes chided by orthodox Buddhists for saying that dependent arising implies the presence of the condition, a view that I notably share with Anālayo (2021). A prominent Theravāda scholar and journal editor once insisted that the formula only requires the existence of the condition. At the time, I was flummoxed by this but found it difficult to articulate why.
In modern arguments about causality (which is more rigorous than mere conditionality), physical proximity (or locality) is required for causation. Causation at a distance, i.e. "action at a distance", is a deeply problematic idea. Where we see apparent action at a distance, such as magnetic attraction, we always find some intervening medium (the electromagnetic field) or an alternative explanation (gravity is not a force, but an effect of the geometry of spacetime). Most modern scientists and philosophers would question whether any action at a distance is possible on the macro scale that Buddhism deals with. There is an exception at the nanoscale, at which it seems that locality may be up for grabs. Causation, as far as any Iron Age Buddhist could have understood it, at a minimum requires the cause to be in the same physical location as that which it acts on, or immediately physically proximate to it. This is not only a logical necessity, but is also implied by the grammar of the Pāli formula of dependent arising.
So, I can now more confidently insist that the dependent arising formula states that a condition must be present for an effect to arise. Its mere existence is insufficient: if, for example, the condition existed on the other side of the planet at the bottom of the ocean, there would be no possibility of it causing an effect here in my house.
From Experience to Reality
Causality is a tricky topic (especially if we are trying to understand an Iron Age worldview), but it is easy compared to "reality". The word is used so vaguely and ambiguously that sometimes it hardly seems to mean anything. Defining "reality" is next to impossible without invoking some other metaphysical quality. For example, we might say that reality is that which exists. But what does it mean to exist? Philosophers are still arguing about this one.
In my view, to be "real" is to have some observable quality that is, or some qualities that are, independent of any particular observer or their beliefs. It is entirely possible that some real things cannot ever be observed by us. About such things we know nothing, and at this stage we likely never will. Many things that might be observed have not been. Think of bacteria, which existed for billions of years but were first observed in the seventeenth century.
For those aspects of reality that are apparent, all observers agree on some ontologically objective facts. For example, gravity on earth is experienced as an acceleration of 9.81 ± 0.03 m/s² towards the centre of the planet, and everyone who measures it accurately gets a value in that range. Variations can be explained by the inherent measurement error, and by the thickness and density of the earth's crust at the point of measurement (the oblate-spheroid shape of the planet is a factor in this). Gravity is not a matter of opinion. It is not produced by each person individually. Gravity is a fact that transcends the observer. How we explain the universality of gravity depends on the context.
Those who argue that the material world is an illusion or is generated by the mind, have no interest in explaining a phenomenon such as gravity. It's just part of the "illusion". Illusion and related words are often bandied about in this context. We often see clickbait headlines like "Reality is an illusion" or "self is an illusion". But this is not a form of explanation: it does not help us to understand the concepts involved. Even if something is actually an illusion—like "the dress"—simply calling it an illusion leaves open all the important questions.
That said, gravity certainly does not behave like an illusion; it behaves like a "brute fact". Anyone who seriously doubts this could try jumping off a tall building while fervently imagining that they can fly to test their belief (Darwin Awards await).
Some Buddhists are surprised to discover I distinguish experience from reality. They wonder if they are not one and the same thing (i.e. they are Idealists). The reasoning is usually along the lines of "mind creates reality". This is a misconception. If mind did create reality, then there would be no reason for everyone to imagine gravity being 9.81 ± 0.03 m/s². There would be nothing to prevent me from inventing gravity at 5.6 ± 0.3 m/s², or any arbitrary figure. In the absence of an objective world, what could possibly account for the uniformity and universality of gravity? I've yet to see any convincing explanation of this from an Idealist.
NB the standard figure for gravity is often given with greater precision than the measurement error allows. The standard figure is 9.80665 m/s², but the variation due to error is in the second decimal place (0.03 m/s²), so the standard figure cannot have a precision greater than that, i.e. 9.81 m/s².
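To make that rounding rule concrete, here is a minimal sketch in Python (my illustration, not anything from the post) of trimming a measured value to the decimal place its error term supports:

import math

def round_to_uncertainty(value, error):
    # Decimal place of the leading digit of the error, e.g. 0.03 -> 2 places.
    places = -int(math.floor(math.log10(abs(error))))
    return round(value, places), round(error, places)

# The standard figure 9.80665 with a variation of about 0.03:
print(round_to_uncertainty(9.80665, 0.03))  # prints (9.81, 0.03)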
Gravity is just one of many universal quantities that we know of. Others include the mass of a proton, the charge of an electron, and the speed of light in a vacuum. Explaining these from an Idealistic worldview is difficult at best. Universality seems to require something extrinsic to the observer in order to impose standardisation, but how is this to be achieved in a nonmaterial, idealistic worldview? An objective universe, independent of observers, is far and away the simplest and most elegant solution to shared knowledge and universal constants. Over the last 450 years, scientists have described our universe to an exquisite level of detail, often to 10 or 12 decimal places, so that in terms of our everyday world, we now completely understand the processes involved. On this see these blog posts by Sean Carroll.
The gaps in our understanding of the universe as a whole are huge, but they are at the extremes. The physics of human scales of mass, length, and energy is fully comprehended by the atomic theory of matter and forces. Buddhist idealism is forced to sweep 450 years of science under the carpet and pretend that it is inconsequential compared to what Buddhists say they learn in meditation about the nature of reality.
Early Buddhists didn't explicitly say so, but they implied that they accepted the existence of an objective world. An objective world is not a problem for early Buddhist doctrine, or for Prajñāpāramitā, because the focus is on sensory experience and what happens to our minds when we withdraw attention from sensory experience. The nature of the objective world is, at best, secondary to questions about the nature of experience and the meaning and significance of the complete cessation of sensory experience. As long as the nature of reality allows for sensory experience and cessation, it doesn't matter what we believe about it. This was especially so in Iron Age India, when it seemed plausible to take nirvāṇa as an analogue of death, so that by attaining the former, we bring the latter to an end. Once rebirth caught on, the end of it became the avowed goal of all known Iron Age Indian religions.
Still, getting from objectively real to objective reality is a much bigger step than most people realise.
From Reality to Myth
My approach to abstract concepts like "reality" is broadly speaking nominalist. In this view, reality is the abstract notion that all real things have something in common that qualifies them as real. This common quality then retrospectively authenticates a phenomenon as "real". On a nominalist reading, however, abstractions themselves are not real. Abstractions are ideas that we have about experience. Abstracting a perceived commonality and then retrospectively using that abstraction to define what is "real" is a method that produces nonsense. I noted above that it's very difficult to define reality from first principles. Part of the problem is that "reality" is an abstraction; an idea. And this allows that different people can define reality differently depending on their idea. This also means that a phenomenological account of "reality" is no help: what kind of phenomenon is an idea? Ideas are subjective phenomena. So how can a subjective phenomenon be used to define something objective?
A further problem we routinely face in Buddhism is that many Buddhists believe in a magical reality over and above "mundane reality". In other words, many Buddhists are openly dualistic about this world (ayaṃ loko) and the world beyond (paraṃ loko). This is typical of all religions that emphasise "life after death". Many Buddhists insist that there is a more real world, or a real world juxtaposed with the world of illusions reflected by sensory experience, waiting for us after death, be it nirvāṇa or a buddhakṣetra. The world of experience is, at best, a poor reflection of a "spiritual" (read "magical") reality beyond. For example, my bête noire, Edward Conze, openly argued that a magical [his word] reality existed over and above physical reality. Moreover, he apparently believed this for many years before he ever encountered a Sanskrit text. He managed to shoehorn this view into a Marxist analysis of Aristotle long before he shoehorned it into Prajñāpāramitā.
There is an obvious attraction in the idea of a "world beyond"; a world that has none of the flaws of our world; a world that is not broken, cruel, and merciless; a world in which all of our desires are fulfilled, and so on. One need not labour the point since a better afterlife is the essence of what all religions promise followers. Although it is notable that some early Buddhists stated that their intention was "the end of the world" (lokassa anto).
It's not until the Pure Land texts that we see this idea of a magical reality beyond the "mundane" world begin to take hold in Buddhism. Before this there were better and worse rebirths, but all rebirth was problematic. Rebirth in a "heaven" (devaloka) only prolongs the inevitable and has no soteriological value. Indeed, some Buddhists say that liberation is only possible from the human realm (manussaloka).
By Buddhists' own definition there can only be one Buddha at a time, and Gautama disappeared from our world when he died. Gautama brought rebirth to an end, and his post-mortem status was officially "indeterminate" (avyākṛta). But this was apparently interpreted in some quarters as Gautama abandoning us to our fate. In response, Buddhists invented alternative universes where living Buddhas could still be found who were willing to "save" us. These Buddhas effectively live forever and will rescue any faithful devotee from saṃsāra. At first this centred on the Buddha Akṣobhya and his buddhafield Abhirati, but he was soon eclipsed by Amitābha, who lives in Sukhāvati and is much less demanding: a single act of recalling his name (nāmānusmṛti) is enough to draw his attention, and he comes to our universe to collect us after death so that we are reborn in Sukhāvati and from there attain liberation from rebirth. The two sutras that describe this are both called the Sukhāvativyūha Sūtra. They seem to have appeared around the same time as the Prajñāpāramitā literature and have proved to be amongst the most influential texts in Buddhist history. It's likely that theistic Pure Land followers are the majority of all Buddhists worldwide.
Such stories are mythological. That is to say, these stories reflect the values of some Buddhists at some point in time and space, expressed in symbolic, often anthropomorphic, terms. The stories don't reflect actual events. Myths are not objective histories to inform us about the past. As noted, Buddhist myths reflect a growing dissatisfaction with the idea that Gautama simply left us behind when he ended his own stream of rebirths. A really good person, they reasoned, would have stuck around to give us a helping hand: who could look at the world and not conclude that it desperately needs help? Not me. The Buddha was supposed to be the epitome of good.
A little later a related idea emerges, i.e. the idea of a pluralistic Buddha who at one level seemed to be a human man, but the mortal man was merely a material manifestation of a timeless, immaterial, undying principle of awakening. The issue of the Buddha's apparently short lifespan is tackled in this way in the Suvarṇabhāsottama Sūtra (aka the Golden Light Sutra). These are religious myths, but Buddhists the world over either believe that they are objectively true or behave as if they describe reality. Again, this is theism, turning the Buddha into a god.
At around the same time as these myths were emerging and taking Buddhism in innovative directions, some Buddhists, notably one known as Nāgārjuna, began to assert that the absence of sensory experience is reality. This is the essence of Madhyamaka metaphysics, for example. We often see this stated as "emptiness is reality" as though this means something, although I think it does not. Mādhyamikas also say things like "dharmas don't exist", although whether or not Nāgārjuna said this or even implied it is moot. The problem here is that although there is a state in which all sensory experiences cease, asserting that this state is reality is problematic, since it lumps all phenomena into the "not real" category, which is completely absurd and creates paradoxes. In short, reifying the absence of experience gets us nowhere. But some Buddhists still value the contradictions and paradoxes that this stance throws up. They seem to take the existence of paradox as confirmation that they are on the right track, whereas I would say that a paradox reflects either our ignorance or a mistake. In the case of Prajñāpāramitā it is both: we were naively ignorant of the context and misled by the lies of Edward Conze (et al) to believe that paradox was normal when, in point of fact, paradox and contradiction play no role in Buddhism until substantially later.
Not Doing Metaphysics
Talk of grand abstractions like truth, reality, and existence all comes under the heading of metaphysics. Anyone who gives an opinion on "reality", let alone the "nature of reality", is ipso facto doing metaphysics. Hence, I do not believe Mādhyamikas when they claim not to be doing metaphysics while asserting that they understand or have experienced the nature of reality.
Humans are constantly trying to discern the reality that lies behind or beyond sensory experience because we all know that our eyes can be deceived. In modern terms, the world we experience is a virtual model created by the brain (as demonstrated, for example, by phantom limb syndrome or the Capgras delusion). The better our model of the world is, the better our chances of survival and procreation. Most of us are not naive realists. We do understand that reality and experience are not identical, and we strive to minimise the differences or errors. When we foreground this in our thinking we may become more reticent about drawing conclusions about reality based on unusual experiences.
When someone makes an assertion about reality or has an opinion on what is real, it is always legitimate to ask "How do you know?". Doing this, we find that Buddhists place high value and significance on experiences in meditation. Some of these experiences have all the hallmarks of hallucinations caused by the brain's response to sensory deprivation. In the end, the one thing that makes all the difference is the fact that sensory experience can cease, though I still hesitate to call this "an experience". Along the way we lose our sense of our body, our sense of self, and our sense of a world "out there". When all sensory experience has stopped and we are still alive and aware, we find ourselves in a contentless but nonetheless hyperreal state that begs to be assigned meaning and significance. The cessation of the sense of self, for example, is often seen as evidence of the nonexistence of self.
However, "I don't see it" and "It doesn't exist" are very much not the same thing.
The mystic says that the experience of, say, selflessness, is sufficient to establish that our "self" is not real. This is a metaphysical conclusion. But it's also solipsistic (i.e. egocentric). One of my most striking memories of timelessness in meditation was on a long retreat. I was deeply concentrated and sat on after the bell rang for the conclusion of the session. While I was there not noticing the passing of time, the other retreatants prepared, cooked, and served a meal. That took time; about one hour in fact. Time that I didn't notice passing. The obvious conclusion here is not "time is not real" or "time doesn't exist", but that I was unaware of time passing for about an hour while everyone around me had a pretty normal experience of time. This is an epistemic conclusion. It lacks the panache and glamour of metaphysics, it doesn't cast me as the hero of the story, but it's more intellectually honest.
The weight of evidence is that most of these kinds of metaphysical conclusions that appeal to Buddhists are factually wrong. What other conclusion might someone who has experienced, say, the cessation of their self come to? I like to use the example of Gary Weber, who reports that he has no sense of self. I find Weber very credible, so I believe him when he says that he doesn't experience much if any sense of self. And yet, wildly contrary to Buddhist doctrine, he takes this to mean that everything that happens is predetermined and events unfold without any influence from us whatever. He will tell you that we don't really make decisions; we are just carried along, falsely believing that our desires cause our actions, when in fact it's all just a fixed set of events playing out as they were always going to. Clearly this is a very different metaphysical conclusion from the one your average Buddhist would arrive at, based on experiences that seem to be exactly the same.
An alternative explanation that occurs to me is that the apparently selfless might conclude that selfing, the activities of the self, is now going on unconsciously. This would help explain why a person with "no self" is able to carry on a conversation for example, as Gary Weber obviously does. A conversation is a complex social interaction in which each participant has to keep track of who said what to whom, and whose turn it is to talk. It seems to me that this would be impossible without some sense of self/other dichotomy. If someone who has no sense of self is conversing normally, we might want to conclude that their selfing was now unconscious. Unconscious selfing presents fewer problems than conscious selfing, because the role of self-centeredness is reduced. Moreover it is considerably less problematic than the view that no self exists, even in people who sincerely believe that they are experiencing themselves as a self from moment to moment.
In Triratna we often talk about this in psychological terms, particularly in terms of the subject/object duality "breaking down". Many, perhaps most, of us take this to mean that the subject/object duality is not real. The corollary, that the absence of a subject/object duality is reality, follows, but all the same caveats apply to us as to others. Just because we can experience the subject/object duality breaking down does not mean that it is not real or that it doesn't exist. And so on.
The Alternative: An Epistemic Approach
Reality is a complex subject. And the relationship of experience to reality is not clear either in Buddhism or in some modern accounts of mind.
The mind-body problem is one of the most famous philosophical conundrums. My own view is that the dichotomy is not really one of mind and body but is, more fundamentally, a matter-spirit dichotomy. That is, I take the distinction to have deeper roots in our basic ideas about the material world and another world of invisible life-force often associated with the afterlife. This is a prominent topic in my book Karma and Rebirth Reconsidered and in a range of blog posts. My sense is that while most scientists now eschew the grosser forms of matter-spirit dualism (since they don't believe in "spirit"), the average person still has a profoundly dualistic outlook. Almost everyone I know believes in an afterlife, for example, and this necessitates some ontological dualism.
In epistemic terms the subject/object duality is real since we get information about subjectivity and objectivity through completely different sensory modalities: introspection and extrospection. One way of thinking about meditation is that it shuts down extrospection and leaves us in a purely subjective state. If we mistake this purely subjective state for objective reality, then we may be tempted into the conclusion that "mind makes reality", but this requires that we give no value whatever to objectivity. And this seems a perverse way of thinking about it.
Dualisms are deeply embedded in how humans conceptualise the world. And when we take the distinctions to be metaphysical, as we do in matter-spirit dualisms, we find ourselves in tricky territory. What usually happens is that having divided the world into two, we dismiss one part (usually matter) as unreal. Materialism, as John Searle pointed out, is a dualism in which proponents divide the world into material and non-material halves and declare the non-material to be unreal. This manoeuvre has consequences. If the mind is non-material, then the materialist is left with no explanation of it except to argue that it is an illusion.
An epistemic approach to this problem rapidly finds purchase and leverage over this particular dualism. As I say, there is an obvious epistemic distinction between how we get information about the world and how we get information about ourselves. We have a range of external senses that inform us about the world in particular modalities: sight, sound, smell, taste, touch. This information allows us to construct virtual models of the world that are efficient for navigating the world. We have a different set of senses for the internal states of our body, many of which are not available to introspection or conscious control (e.g. blood sugar levels). Notably, our mind (thoughts, feelings, emotions, etc.) is an important source of information about our own internal states.
There is some crossover, as when we gain information about our body by looking at it. But generally speaking there is a clear epistemic distinction between "in here" and "out there". Just as there is an epistemic distinction between, say, seeing light reflected from an object and hearing the physical vibrations that it makes. To my knowledge, and despite the phrase "seeing is believing", no one has ever argued that seeing is real and hearing is unreal, or vice versa. We acknowledge that both occur, that they are different modes of sensing, and give us different information. And we can always ask another person, "Did you see/hear that?" and compare notes. Problems emerge when we jump to metaphysical conclusions based on epistemic differences without first establishing whether there is some metaphysical basis for the differences.
As far as anyone can tell, there is no mind/body dualism in the sense that they are different substances. But that said, we do have an undeniable experience of an epistemic difference between mind and body. We gain knowledge of each in different ways. One cannot introspect an external object, for example. Nor can one use empathy to project the emotional disposition of a non-sentient object. When I put my cup down I don't wonder how the table will feel about it. There is no way for the table to support sentience, let alone form an opinion.
At this point, Buddhists cite mystical experiences as evidence for their conclusions. The problem with mystical experiences is that they are interpreted differently according to one's preferences. I have already cited the example of Gary Weber, the Advaita Vedantin. But there are also Christian mystics, for example, who interpret what seem like the same experiences as evidence for the existence of God.
Everyone is trying to make sense of their world. Some go about it more systematically than others. The less systematic our approach, the more likely that errors and infelicities will creep into our worldview. Early Buddhists systematically explored mental states that occur in the process of withdrawing attention from sensory experience. The results are practices that we call "meditation", a word that goes back to an Indo-European root *med and (rather appropriately) means "take appropriate measures". Early Buddhists did not systematically investigate anything else. They showed no interest in "reality" or the "nature" of reality, except insofar as it pertained to karma and rebirth, which they accepted a priori as true.
Here we see the disadvantage of religious modes of thinking. Religieux begin reasoning from a metaphysical commitment: a belief. And recall Michael Taft's aphorism: belief is an emotion about an idea. From the belief, religieux look for evidence that is consistent with that belief and hold it up as confirmation of the belief. At the same time they overlook, ignore, or dispose of any counterfactual information.
Religious metaphysics are not motivated by a search for the truth. Religieux invariably believe they already know the truth. This applies to Buddhists as much as to any other religion. We start from certainty and then inquire as to how reality confirms our assumptions: a procedure known as confirmation bias.
Buddhist metaphysics, of which there are several, are fine except that they disagree in every possible way with physics. Buddhists who are aware of this fact (and appalled by it) will often invoke Eugene Wigner's version of Niels Bohr's interpretation of the Schrödinger equation, i.e. "consciousness collapses the wavefunction". Back in the real world, physicists universally agree that Wigner was talking bollocks, and most of them have abandoned Copenhagen (though they continue to teach it to undergraduates). Buddhists seldom, if ever, come out in defence of other valid interpretations of the Schrödinger equation. We see no Buddhist essays arguing that, for example, Bohmian mechanics (aka pilot-wave theory) reflects the Buddha's insight. Buddhists are attracted to the deprecated Copenhagen interpretation purely because of confirmation bias. There really is no connection between the Iron Age observations of Buddhists about how their minds work and the twentieth century observations about how matter changes over time on the nanoscale.
That said, like other physicists, Bohm himself later went into the business of speculative metaphysics. It is a quirk of many physicists that they start to believe that they really do understand everything. There are any number of books of unscientific (but influential) nonsense from people like Eugene Wigner and Linus Pauling (vitamin C), including Bohm himself and even the venerable Albert Einstein.
When I began to adopt an epistemic approach to Buddhism, I realised that I no longer had any conflicts with my education in the physical sciences (I majored in chemistry).
Buddhist metaphysics as reflected in various texts across time have no advantages over any other religious metaphysics. The Buddhist worldview is always stated in such a way as to allow for the supernatural (or what I sometimes call "the unnatural"): karma, rebirth, gods, demons, spirits, heavens, hells, ESP, etc. At best these views approach the sophistication of Descartes, accepting a dualistic world in order to preserve a place for non-natural entities, forces, locations, and events. Buddhism provides us with nothing approaching the physical laws of nineteenth century science. No equivalent to, say, the universal principle of conservation of momentum. Which is hardly surprising given that most Buddhists think the real world is "an illusion" and that a "spiritual" Reality is to be found in purely subjective mental states. Why would this approach produce any insights into the real?
While religious Buddhists have an ongoing battle with the real, in that it clearly does not conform to Buddhist orthodoxy, I no longer have this problem. I no longer feel any tension between my scientific outlook and my Buddhist vocation based on working with my mind. They are two distinct provinces of knowledge, at least for the time being.
Why does this matter? I think religion in Europe (and her colonies), generally, is struggling with two tendencies: the tendency towards fundamentalism and the tendency towards rationalism. The former stymies all intellectual progress, while the latter sees no value in religion. We've all watched secular mindfulness rapidly become very much more popular than religious Buddhism. We've seen many emotive arguments against practising mindfulness outside of the metaphysical commitments held by most Buddhists. How much worse will it be when we begin to see secular training in attention withdrawal (if it does not already exist) and a secular "enlightenment"? That could easily eclipse European Buddhism, though my sense is that Asian Buddhism is more insulated from this kind of discourse.
Central to my faith in 2022 is this credo: I believe that sensory experience can cease without loss of basic awareness. I believe this knowledge was discovered in ancient India and became the basis for a number of religions.
Although the result is described as "contentless awareness", those who undergo it can remember what it was like and are usually eager to offer an interpretation. To date, religious explanations have dominated the field. Nascent academic attempts to characterise and categorise such phenomena are fascinating, but still lack coherence. While I mainly write for a Buddhist audience, I kind of hope that some academic will also notice my epistemic approach and see how it disentangles religious sentiments from the difficult work of identifying and characterising what is real.
In this sense, then, I think enlightenment, awakening, liberation, purification, or whatever we call it, is a real phenomenon. I feel fairly confident that I've met people who are "in that state" (tathā-gata, as we say in Pāli). And scientists are right now measuring the neural activity of people in a state of contentless awareness, looking for, and finding, neural correlates of cessation and awakening. Where Buddhism and science part company is precisely where all religions break down, that is, on the interpretation of experience, especially with respect to what experience tells us about "reality".
Cessation is something we can systematically cultivate. The way to cultivate it is to minimise sensory experience, both in daily life and more radically in meditation. The goal of practice is a form of knowledge, not a form of existence. We call this knowledge prajñā or paragnosis, knowledge from beyond the cessation of sensory experience. Without the supernatural elements, with the view that the Buddha was talking about experience rather than reality, we can drop all the metaphysical speculation about what it all means, and arrive at a simpler, more coherent view of Buddhism, that has realistic goals for maximising human potential.
|
396a3ffdc196cdd6 | Editor's note: The following is a text-only version. The complete version with artwork is available for purchase here (PDF).
The story of the periodic system for classifying the elements can be traced back over 200 years. Throughout its long history, the periodic table has been disputed, altered and improved as science has progressed and as new elements have been discovered [see “Making New Elements,” by Peter Armbruster and Fritz Peter Hessberger]. But despite the dramatic changes that have taken place in science over the past century—namely, the development of the theories of relativity and quantum mechanics—there has been no revolution in the basic nature of the periodic system. In some instances, new findings initially appeared to call into question the theoretical foundations of the periodic table, but each time scientists eventually managed to incorporate the results while preserving the table’s fundamental structure. Remarkably, the periodic table is thus notable both for its historical roots and for its modern relevance.
The term “periodic” reflects the fact that the elements show patterns in their chemical properties in certain regular intervals. Were it not for the simplification provided by this chart, students of chemistry would need to learn the properties of all 112 known elements. Fortunately, the periodic table allows chemists to function by mastering the properties of a handful of typical elements; all the others fall into so-called groups or families with similar chemical properties. (In the modern periodic table, a group or family corresponds to one vertical column.)
The discovery of the periodic system for classifying the elements represents the culmination of a number of scientific developments, rather than a sudden brainstorm on the part of one individual. Yet historians typically consider one event as marking the formal birth of the modern periodic table: on February 17, 1869, a Russian professor of chemistry, Dimitri Ivanovich Mendeleev, completed the first of his numerous periodic charts. It included 63 known elements arranged according to increasing atomic weight; Mendeleev also left spaces for as yet undiscovered elements for which he predicted atomic weights.
Prior to Mendeleev’s discovery, however, other scientists had been actively developing some kind of organizing system to describe the elements. In 1787, for example, French chemist Antoine Lavoisier, working with Antoine Fourcroy, Louis-Bernard Guyton de Morveau and Claude-Louis Berthollet, devised a list of the 33 elements known at the time. Yet such lists are simply one-dimensional representations. The power of the modern table lies in its two- or even three-dimensional display of all the known elements (and even the ones yet to be discovered) in a logical system of precisely ordered rows and columns.
In an early attempt to organize the elements into a meaningful array, German chemist Johann Döbereiner pointed out in 1817 that many of the known elements could be arranged by their similarities into groups of three, which he called triads. Döbereiner singled out triads of the elements lithium, sodium and potassium as well as chlorine, bromine and iodine. He noticed that if the three members of a triad were ordered according to their atomic weights, the properties of the middle element fell in between those of the first and third elements. For example, lithium, sodium and potassium all react vigorously with water. But lithium, the lightest of the triad, reacts more mildly than the other two, whereas the heaviest of the three, potassium, explodes violently. In addition, Döbereiner showed that the atomic weight of the middle element is close to the average of the weights for the first and third members of the triad. Döbereiner’s work encouraged others to search for correlations between the chemical properties of the elements and their atomic weights. One of those who pursued the triad approach further during the 19th century was Peter Kremers of Cologne, who suggested that certain elements could belong to two triads placed perpendicularly. Kremers thus broke new ground by comparing elements in two directions, a feature that later proved to be an essential aspect of Mendeleev’s system.
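Döbereiner’s averaging rule is easy to check with modern standard atomic weights (these are today’s values, not Döbereiner’s own figures). A short Python sketch:

triads = {
    "Li/Na/K": (6.94, 22.99, 39.10),
    "Cl/Br/I": (35.45, 79.90, 126.90),
}

for name, (light, middle, heavy) in triads.items():
    mean = (light + heavy) / 2
    print(name, "middle:", middle, "mean of outer pair:", round(mean, 2))

# Li/Na/K: middle 22.99, mean of outer pair 23.02
# Cl/Br/I: middle 79.90, mean of outer pair 81.18

The agreement is close but not exact, which hints at why atomic weight ultimately proved to be the wrong ordering property.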
In 1857 French chemist Jean-Baptiste-André Dumas turned away from the idea of triads and focused instead on devising a set of mathematical equations that could account for the increase in atomic weight among several groups of chemically similar elements. But as chemists now recognize, any attempt to establish an organizing pattern based on an element’s atomic weight will not succeed, because atomic weight is not the fundamental property that characterizes each of the elements.
Periodic Properties
The crucial characteristic of Mendeleev’s system was that it illustrated a periodicity, or repetition, in the properties of the elements at certain regular intervals. This feature had been observed previously in an arrangement of elements by atomic weight devised in 1862 by French geologist Alexandre-Emile Béguyer de Chancourtois. The system relied on a fairly intricate geometric configuration: de Chancourtois positioned the elements according to increasing atomic weight along a spiral inscribed on the surface of a cylinder and inclined at 45 degrees from the base.
The first full turn of the spiral coincided with the element oxygen, and the second full turn occurred at sulfur. Elements that lined up vertically on the surface of the cylinder tended to have similar properties, so this arrangement succeeded in capturing some of the patterns that would later become central to Mendeleev’s system. Yet for a number of reasons, de Chancourtois’s system did not have much effect on scientists of the time: his original article failed to include a diagram of the table, the system was rather complicated, and the chemical similarities among elements were not displayed very convincingly.
Several other researchers put forward their own versions of a periodic table during the 1860s. Using newly standardized values for atomic weights, English chemist John Newlands suggested in 1864 that when the elements were arranged in order of atomic weight, any one of the elements showed properties similar to those of the elements eight places ahead and eight places behind in the list—a feature that Newlands called “the law of octaves.”
In his original table, Newlands left empty spaces for missing elements, but his more publicized version of 1866 did not include these open slots. Other chemists immediately raised objections to the table because it would not be able to accommodate any new elements that might be discovered. In fact, some investigators openly ridiculed Newlands’s ideas. At a meeting of the Chemical Society in London in 1866, George Carey Foster of University College London asked Newlands whether he had considered ordering the elements alphabetically, because any kind of arrangement would present occasional coincidences. As a result of the meeting, the Chemical Society refused to publish Newlands’s paper.
Despite its poor reception, however, Newlands’s work does represent the first time anyone used a sequence of ordinal numbers (in this case, one based on the sequence of atomic weights) to organize the elements. In this respect, Newlands anticipated the modern organization of the periodic table, which is based on the sequence of so-called atomic numbers. (The concept of atomic number, which indicates the number of protons present within an atom’s nucleus, was not established until the early 20th century.)
The Modern Periodic Table
Chemist Julius Lothar Meyer of Breslau University in Germany, while in the process of revising his chemistry textbook in 1868, produced a periodic table that turned out to be remarkably similar to Mendeleev’s famous 1869 version—although Lothar Meyer failed to classify all the elements correctly. But the table did not appear in print until 1870 because of a publisher’s delay—a factor that contributed to an acrimonious dispute for priority that ensued between Lothar Meyer and Mendeleev.
Around the same time, Mendeleev assembled his own periodic table while he, too, was writing a textbook of chemistry. Unlike his predecessors, Mendeleev had sufficient confidence in his periodic table to use it to predict several new elements and the properties of their compounds. He also corrected the atomic weights of some already known elements. Interestingly, Mendeleev admitted to having seen certain earlier tables, such as those of Newlands, but claimed to have been unaware of Lothar Meyer’s work when developing his chart.
Although the predictive aspect of Mendeleev’s table was a major advance, it seems to have been overemphasized by historians, who have generally suggested that Mendeleev’s table was accepted especially because of this feature. These scholars have failed to notice that the citation from the Royal Society of London that accompanied the Davy Medal (which Mendeleev received in 1882) makes no mention whatsoever of his predictions. Instead Mendeleev’s ability to accommodate the already known elements may have contributed as much to the acceptance of the periodic system as did his striking predictions. Although numerous scientists helped to develop the periodic system, Mendeleev receives most of the credit for discovering chemical periodicity because he elevated the discovery to a law of nature and spent the rest of his life boldly examining its consequences and defending its validity.
Defending the periodic table was no simple task—its accuracy was frequently challenged by subsequent discoveries. One notable occasion arose in 1894, when William Ramsay of University College London and Lord Rayleigh (John William Strutt) of the Royal Institution in London discovered the element argon; over the next few years, Ramsay announced the identification of four other elements—helium, neon, krypton and xenon—known as the noble gases. (The last of the known noble gases, radon, was discovered in 1900 by German physicist Friedrich Ernst Dorn.)
The name “noble” derives from the fact that all these gases seem to stand apart from the other elements, rarely interacting with them to form compounds. As a result, some chemists suggested that the noble gases did not even belong in the periodic table. These elements had not been predicted by Mendeleev or anyone else, and only after six years of intense effort could chemists and physicists successfully incorporate the noble gases into the table. In the new arrangement, an additional column was introduced between the halogens (fluorine, chlorine, bromine, iodine and astatine) and the alkali metals (lithium, sodium, potassium, rubidium, cesium and francium).
A second point of contention surrounded the precise ordering of the elements. Mendeleev’s original table positioned the elements according to atomic weight, but in 1913 Dutch amateur theoretical physicist Anton van den Broek suggested that the ordering principle for the periodic table lay instead in the nuclear charge of each atom. Physicist Henry Moseley, working at the University of Manchester, tested this hypothesis, also in 1913, shortly before his tragic death in World War I. Moseley began by photographing the x-ray spectrum of 12 elements, 10 of which occupied consecutive places in the periodic table. He discovered that the frequencies of features called K-lines in the spectrum of each element were directly proportional to the squares of the integers representing the position of each successive element in the table. As
Moseley put it, here was proof that “there is in the atom a fundamental quantity, which increases by regular steps as we pass from one element to the next.” This fundamental quantity, first referred to as atomic number in 1920 by Ernest Rutherford, who was then at the University of Cambridge, is now identified as the number of protons in the nucleus.
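In modern notation (not the article’s), Moseley’s relation for the K-alpha line is nu = (3/4) R (Z − 1)^2, where R is the Rydberg frequency (about 3.29 × 10^15 Hz) and Z is the atomic number. A small Python sketch of the regular steps Moseley observed:

RYDBERG_HZ = 3.29e15  # Rydberg frequency in hertz

def k_alpha_frequency(z):
    # Moseley's law for the K-alpha x-ray line of the element with atomic number z.
    return 0.75 * RYDBERG_HZ * (z - 1) ** 2

for z in (20, 25, 29):  # calcium, manganese, copper
    print(z, k_alpha_frequency(z))
# For copper (Z = 29) this gives about 1.9e18 Hz, close to the measured K-alpha frequency.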
Moseley’s work provided a method that could be used to determine exactly how many empty spaces remained in the periodic table. After this discovery, chemists turned to using atomic number as the fundamental ordering principle for the periodic table, instead of atomic weight. This change resolved many of the lingering problems in the arrangement of the elements. For example, when iodine and tellurium were ordered according to atomic weight (with iodine first), the two elements appeared to be incorrectly positioned in terms of their chemical behavior. When ordered according to atomic number (with tellurium first), however, the two elements were in their correct positions.
Understanding the Atom
The periodic table inspired the work not only of chemists but also of atomic physicists struggling to understand the structure of the atom. In 1904, working at Cambridge, physicist J. J. Thomson (who also discovered the electron) developed a model of the atom, paying close attention to the periodicity of the elements. He proposed that the atoms of a particular element contained a specific number of electrons arranged in concentric rings. Furthermore, according to Thomson, elements with similar configurations of electrons would have similar properties; Thomson’s work thus provided the first physical explanation for the periodicity of the elements. Although Thomson imagined the rings of electrons as lying inside the main body of the atom, rather than circulating around the nucleus as is believed today, his model does represent the first time anyone addressed the arrangement of electrons in the atom, a concept that pervades the whole of modern chemistry.
Danish physicist Niels Bohr, the first to bring quantum theory to bear on the structure of the atom, was also motivated by the arrangement of the elements in the periodic system. In Bohr’s model of the atom, developed in 1913, electrons inhabit a series of concentric shells that encircle the nucleus. Bohr reasoned that elements in the same group of the periodic table might have identical configurations of electrons in their outermost shell and that the chemical properties of an element would depend in large part on the arrangement of electrons in the outer shell of its atoms.
Bohr’s model of the atom also served to explain why the noble gases lack reactivity: noble gases possess full outer shells of electrons, making them unusually stable and unlikely to form compounds. Indeed, most other elements form compounds as a way to obtain full outer electron shells. More recent analysis of how Bohr arrived at these electronic configurations suggests that he functioned more like a chemist than has generally been credited. Bohr did not derive electron configurations from quantum theory but obtained them from the known chemical and spectroscopic properties of the elements.
In 1924 another physicist, Austrian-born Wolfgang Pauli, set out to explain the length of each row, or period, in the table. As a result, he developed the Pauli Exclusion Principle, which states that no two electrons can exist in exactly the same quantum state, which is defined by what scientists call quantum numbers. The lengths of the various periods emerge from experimental evidence about the order of electron-shell filling and from the quantum-mechanical restrictions on the four quantum numbers that electrons can adopt.
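As a minimal sketch (my illustration, not Pauli’s own derivation), counting the allowed combinations of the four quantum numbers shows why each shell n holds at most 2n^2 electrons; the observed period lengths (2, 8, 8, 18, 18, 32) then follow once the empirical filling order is taken into account:

def shell_capacity(n):
    # For each l = 0..n-1 there are (2l + 1) values of m_l, each with two
    # spin states; the exclusion principle forbids duplicate (n, l, m_l, m_s).
    return sum(2 * (2 * l + 1) for l in range(n))

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # prints [2, 8, 18, 32]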
The modifications to quantum theory made by Werner Heisenberg and Erwin Schrödinger in the mid-1920s yielded quantum mechanics in essentially the form used to this day. But the influence of these changes on the periodic table has been rather minimal. Despite the efforts of many physicists and chemists, quantum mechanics cannot explain the periodic table any further. For example, it cannot explain from first principles the order in which electrons fill the various electron shells. The electronic configurations of atoms, on which our modern understanding of the periodic table is based, cannot be derived using quantum mechanics (this is because the fundamental equation of quantum mechanics, the Schrödinger equation, cannot be solved exactly for atoms other than hydrogen). As a result, quantum mechanics can only reproduce Mendeleev’s original discovery by the use of mathematical approximations—it cannot predict the periodic system.
Variations on a Theme
In more recent times, researchers have proposed different approaches for displaying the periodic system. For instance, Fernando Dufour, a retired chemistry professor from Collège Ahuntsic in Montreal, has developed a three-dimensional periodic table, which displays the fundamental symmetry of the periodic law, unlike the two-dimensional form of the table in common use. The same virtue is also seen in a version of the periodic table shaped as a pyramid, a form suggested on many occasions but most recently refined by William B. Jensen of the University of Cincinnati.
Another departure has been the invention of periodic systems aimed at summarizing the properties of compounds rather than elements. In 1980 Ray Hefferlin of Southern Adventist University in Collegedale, Tenn., devised a periodic system for all the conceivable diatomic molecules that could be formed between the first 118 elements (only 112 have been discovered to date).
Hefferlin’s chart reveals that certain properties of molecules—the distance between atoms and the energy required to ionize the molecule, for instance—occur in regular patterns. This table has enabled scientists to predict the properties of diatomic molecules successfully.
In a similar effort, Jerry R. Dias of the University of Missouri at Kansas City devised a periodic classification of a type of organic molecule called benzenoid aromatic hydrocarbons. The compound naphthalene (C10H8), found in mothballs, is the simplest example. Dias’s classification system is analogous to Döbereiner’s triads of elements: any central molecule of a triad has a total number of carbon and hydrogen atoms that is the mean of the flanking entries, both downward and across the table. This scheme has been applied to a systematic study of the properties of benzenoid aromatic hydrocarbons and, with the use of graph theory, has led to predictions of the stability and reactivity of some of these compounds.
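The triad arithmetic can be illustrated with the linear acenes (my example, chosen for simplicity; it may not match the entries in Dias’s actual table). Each successive acene adds C4H2, so the middle member’s atom count is exactly the mean of its neighbours’:

# Total atom counts for three successive linear acenes.
naphthalene = 10 + 8   # C10H8  -> 18 atoms
anthracene = 14 + 10   # C14H10 -> 24 atoms
tetracene = 18 + 12    # C18H12 -> 30 atoms

assert anthracene == (naphthalene + tetracene) / 2  # 24 == (18 + 30) / 2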
Still, it is the periodic table of the elements that has had the widest and most enduring influence. After evolving for over 200 years through the work of many people, the periodic table remains at the heart of the study of chemistry. It ranks as one of the most fruitful ideas in modern science, comparable perhaps to Charles Darwin’s theory of evolution. Unlike theories such as Newtonian mechanics, it has not been falsified or revolutionized by modern physics but has adapted and matured while remaining essentially unscathed.
a1c9ae845ebe41f0 | Math Is Fun Forum
#1226 2021-12-18 00:27:49
Re: Miscellany
1202) Ecosystem
An ecosystem (or ecological system) consists of all the organisms and the physical environment with which they interact. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems are controlled by external and internal factors. External factors, such as climate, the parent material which forms the soil, and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy.
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration is thought to contribute to all 17 Sustainable Development Goals.
(The Sustainable Development Goals (SDGs) or Global Goals are a collection of 17 interlinked global goals designed to be a "blueprint to achieve a better and more sustainable future for all". The SDGs were set up in 2015 by the United Nations General Assembly (UN-GA) and are intended to be achieved by the year 2030. They are included in a UN-GA Resolution called the 2030 Agenda or what is colloquially known as Agenda 2030. The SDGs were developed in the Post-2015 Development Agenda as the future global development framework to succeed the Millennium Development Goals which ended in 2015.
Though the goals are broad and interdependent, two years later (6 July 2017) the SDGs were made more "actionable" by a UN Resolution adopted by the General Assembly. The resolution identifies specific targets for each goal, along with indicators that are being used to measure progress toward each target. The year by which the target is meant to be achieved is usually between 2020 and 2030. For some of the targets, no end date is given.
To facilitate monitoring, a variety of tools exist to track and visualize progress towards the goals. The intention is to make data more available and easily understood. For example, the online publication SDG Tracker, launched in June 2018, presents available data across all indicators. Multiple cross-cutting issues, like gender equity, education, and culture, cut across all of the SDGs. There were serious impacts and implications of the COVID-19 pandemic on all 17 SDGs in the year 2020.)
A brief treatment of ecosystems follows.
An ecosystem can be categorized into its abiotic constituents, including minerals, climate, soil, water, sunlight, and all other nonliving elements, and its biotic constituents, consisting of all its living members. Linking these constituents together are two major forces: the flow of energy through the ecosystem and the cycling of nutrients within the ecosystem. Ecosystems vary in size: some are small enough to be contained within single water droplets while others are large enough to encompass entire landscapes and regions.
Energy flow
The fundamental source of energy in almost all ecosystems is radiant energy from the Sun. The energy of sunlight is used by the ecosystem’s autotrophic, or self-sustaining, organisms (that is, those that can make their own food). Consisting largely of green vegetation, these organisms are capable of photosynthesis—i.e., they can use the energy of sunlight to convert carbon dioxide and water into simple, energy-rich carbohydrates. The autotrophs use the energy stored within the simple carbohydrates to produce the more complex organic compounds, such as proteins, lipids, and starches, that maintain the organisms’ life processes. The autotrophic segment of the ecosystem is commonly referred to as the producer level.
Trophic levels
Together, the autotrophs and heterotrophs form various trophic (feeding) levels in the ecosystem: the producer level (which is made up of autotrophs), the primary consumer level (which is composed of those organisms that feed on producers), the secondary consumer level (which is composed of those organisms that feed on primary consumers), and so on. The movement of organic matter and energy from the producer level through various consumer levels makes up a food chain. For example, a typical food chain in a grassland might be grass (producer) → mouse (primary consumer) → snake (secondary consumer) → hawk (tertiary consumer). Actually, in many cases the food chains of the ecosystem’s biological community overlap and interconnect, forming what ecologists call a food web. The final link in all food chains is made up of decomposers, those heterotrophs (such as scavenging birds and mammals, insects, fungi, and bacteria) that break down dead organisms and organic wastes into smaller and smaller components, which can later be used by producers as nutrients. A food chain in which the primary consumer feeds on living plants is called a grazing pathway, and a food chain in which the primary consumer feeds on dead plant matter is known as a detritus pathway. Both pathways are important in accounting for the energy budget of the ecosystem.
Nutrient cycling
Nutrients are chemical elements and compounds that organisms must obtain from their surroundings for growth and the sustenance of life. Although autotrophs obtain nutrients primarily from the soil while heterotrophs obtain nutrients primarily from other organisms, the cells of each are made up primarily of six major elements that occur in similar proportions in all life-forms. These elements—hydrogen, oxygen, carbon, nitrogen, phosphorus, and sulfur—form the core protoplasm (that is, the semifluid substance that makes up a cell’s cytoplasm and nucleus) of organisms. The first four of these elements make up about 99 percent of the mass of most cells. Additional elements, however, are also essential to the growth of organisms. Calcium and other elements help to form cellular support structures such as shells, internal or external skeletons, and cell walls. Chlorophyll molecules, which allow photosynthetic plants to convert solar energy into chemical energy, are chains of carbon, hydrogen, and oxygen compounds built around a magnesium ion. Altogether, 16 elements are found in all organisms; another eight elements are found in some organisms but not in others.
These bioelements combine with one another to form a wide variety of chemical compounds. They occur in organisms in higher proportions than they do in the environment because organisms capture them, concentrating and combining them in various ways in their cells, and release them during metabolism and death. As a result, these essential nutrients alternate between inorganic and organic states as they rotate through their respective biogeochemical cycles: the carbon cycle, the oxygen cycle, the nitrogen cycle, the sulfur cycle, the phosphorus cycle, and the water cycle. These cycles can include all or part of the following environmental spheres: the atmosphere, which is made up largely of gases including water vapour; the lithosphere, which encompasses the soil and the entire solid crust of Earth; the hydrosphere, which includes lakes, rivers, oceans, groundwater, frozen water, and (along with the atmosphere) water vapour; and the biosphere, which includes all living things and overlaps with each of the other environmental spheres.
#1227 2021-12-19 00:35:15
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1203) Cell wall
A cell wall is a structural layer surrounding some types of cells, just outside the cell membrane. It can be tough, flexible, and sometimes rigid. It provides the cell with both structural support and protection, and also acts as a filtering mechanism. Cell walls are absent in animals but are present in most other eukaryotes including algae, fungi and plants and in most prokaryotes (except mollicute bacteria). A major function of the cell wall is to act as a pressure vessel, preventing over-expansion of the cell when water enters.
The composition of cell walls varies between taxonomic group and species and may depend on cell type and developmental stage. The primary cell wall of land plants is composed of the polysaccharides cellulose, hemicelluloses and pectin. Often, other polymers such as lignin, suberin or cutin are anchored to or embedded in plant cell walls. Algae possess cell walls made of glycoproteins and polysaccharides such as carrageenan and agar that are absent from land plants. In bacteria, the cell wall is composed of peptidoglycan. The cell walls of archaea have various compositions, and may be formed of glycoprotein S-layers, pseudopeptidoglycan, or polysaccharides. Fungi possess cell walls made of the N-acetylglucosamine polymer chitin. Unusually, diatoms have a cell wall composed of biogenic silica.
The cell wall is a specialized form of extracellular matrix that surrounds every cell of a plant. The cell wall is responsible for many of the characteristics that distinguish plant cells from animal cells. Although often perceived as an inactive product serving mainly mechanical and structural purposes, the cell wall actually has a multitude of functions upon which plant life depends. Such functions include: (1) providing the living cell with mechanical protection and a chemically buffered environment, (2) providing a porous medium for the circulation and distribution of water, minerals, and other small nutrient molecules, (3) providing rigid building blocks from which stable structures of higher order, such as leaves and stems, can be produced, and (4) providing a storage site for regulatory molecules that sense the presence of pathogenic microbes and control the development of tissues.
Certain prokaryotes, algae, slime molds, water molds, and fungi also have cell walls. Bacterial cell walls are characterized by the presence of peptidoglycan, whereas those of Archaea characteristically lack this chemical. Algal cell walls are similar to those of plants, and many contain specific polysaccharides that are useful for taxonomy. Unlike those of plants and algae, fungal cell walls lack cellulose entirely and contain chitin. The scope of this article is limited to plant cell walls.
Structural proteins
Although plant cell walls contain only small amounts of protein, they serve a number of important functions. The most prominent group are the hydroxyproline-rich glycoproteins, shaped like rods with connector sites, of which extensin is a prominent example. Extensin contains 45 percent hydroxyproline and 14 percent serine residues distributed along its length. Every hydroxyproline residue carries a short side chain of arabinose sugars, and most serine residues carry a galactose sugar. This gives rise to long molecules, resembling bottle brushes, that are secreted into the cell wall toward the end of primary wall formation and become covalently cross-linked into a mesh at the time that cell growth stops. Plant cells may control their ultimate size by regulating the time at which this cross-linking of extensin molecules occurs.
#1228 2021-12-20 00:34:08
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1204) Grande Dixence Dam
Grande Dixence was the tallest dam in the world until completion of the Nurek Dam in the Soviet Union in 1980. It was built in annual stages, a procedure necessary because the Alpine working season is quite short.
The Grande Dixence Dam is a concrete gravity dam on the Dixence at the head of the Val d'Hérémence in the canton of Valais in Switzerland. At 285 m (935 ft) high, it is the tallest gravity dam in the world, fifth tallest dam overall, and the tallest dam in Europe. It is part of the Cleuson-Dixence Complex. With the primary purpose of hydroelectric power generation, the dam feeds four power stations with a total installed capacity of 2,069 MW, generating approximately 2,000 GWh annually, enough to power 400,000 Swiss households.
The dam impounds Lac des Dix (Lake Dix), its reservoir. With a surface area of 4 sq km, it is the second largest lake in Valais and the largest lake above 2,000 m in the Alps. The reservoir receives its water from four different pumping stations: the Z’Mutt, Stafel, Ferpècle and Arolla. At peak capacity, it contains approximately 400,000,000 m^3 (1.4 × 10^10 cu ft) of water, with depths reaching up to 284 m (932 ft). Construction on the dam began in 1950 and was completed in 1961, before the dam was officially commissioned in 1965.
After the Second World War, growing industries needed electricity, and construction on the Cleuson Dam began in 1947 and was completed in 1951. The original Dixence dam was submerged by the filling of Lac des Dix beginning in 1957; it can still be seen when the reservoir level is low. Plans for the Super Dixence Dam were meanwhile finalized by the newly founded company Grande Dixence SA, and construction began in 1950. By 1961, 3,000 workers had finished pouring 6,000,000 m^3 (210,000,000 cu ft) of concrete, completing the dam. At 285 m, it was the world's tallest dam at the time, until it was surpassed by the 300 m Nurek Dam in Tajikistan, completed in 1980. It remains the world's tallest gravity dam.
The Grande Dixence Dam is a 285 m (935 ft) high, 700 m (2,297 ft) long concrete gravity dam. The dam is 200 m (656 ft) wide at its base and 15 m (49 ft) wide at its crest. The dam's crest reaches an altitude of 2,365 m (7,759 ft). The dam structure contains approximately 6,000,000 m^3 (211,888,000 cu ft) of concrete. To secure the dam to the surrounding foundation, a grout curtain surrounds the dam, reaching a depth of 200 m (656 ft) and extending 100 m (328 ft) on each side of the valley.
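The figures quoted above are easy to sanity-check with unit conversions. A short Python sketch, using only the numbers in the text plus the standard factor of roughly 35.3147 cubic feet per cubic metre:

```python
# Sanity checks on the Grande Dixence figures quoted above.
M3_TO_CUFT = 35.3147  # cubic feet per cubic metre

concrete_m3 = 6_000_000
print(f"concrete: {concrete_m3 * M3_TO_CUFT:,.0f} cu ft")    # ~211,888,000 cu ft

reservoir_m3 = 400_000_000
print(f"reservoir: {reservoir_m3 * M3_TO_CUFT:.2e} cu ft")   # ~1.41e+10 cu ft

# ~2,000 GWh per year spread across 400,000 households:
kwh_per_household = 2_000 * 1e6 / 400_000
print(f"{kwh_per_household:,.0f} kWh per household per year")  # 5,000 kWh
```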
#1229 2021-12-21 00:09:23
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1205) Coronary Angiogram
Coronary angiography
How the Test is Performed
Coronary angiography is often done along with cardiac catheterization. This is a procedure that measures pressures in the heart chambers.
The procedure most often lasts 30 to 60 minutes.
Why it's done
Your doctor may recommend a coronary angiogram if you have:
* Symptoms of coronary artery disease, such as chest pain (angina)
* New or increasing chest pain (unstable angina)
* A heart defect you were born with (congenital heart disease)
* Abnormal results on a noninvasive heart stress test
* Other blood vessel problems or a chest injury
* A heart valve problem that requires surgery
Risks
As with most procedures done on the heart and blood vessels, coronary angiography carries some risks, including:
* Heart attack
* Stroke
* Injury to the catheterized artery
* Irregular heart rhythms (arrhythmias)
* Allergic reactions to the dye or medications used during the procedure
* Kidney damage
* Excessive bleeding
* Infection
How you prepare
* Don't eat or drink anything after midnight before your angiogram.
Call your doctor's office if:
* You notice bleeding, new bruising or swelling at the catheter site
* You develop increasing pain or discomfort at the catheter site
* You have signs of infection, such as redness, drainage or a fever
* You have weakness or numbness in the leg or arm where the catheter was inserted
* You develop chest pain or shortness of breath
If the catheter site is actively bleeding and doesn't stop after you've applied pressure, or if the site suddenly begins to swell, contact emergency medical services.
Results
An angiogram can help your doctor:
* Pinpoint where blockages are located in your blood vessels
* Show how much blood flow is blocked through your blood vessels
* Check the results of previous coronary bypass surgery
* Check the blood flow through your heart and blood vessels
#1230 2021-12-22 00:11:40
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1206) Moth
Moths vary greatly in size, ranging in wingspan from about 4 mm (0.16 inch) to nearly 30 cm (about 1 foot). Highly adapted, they live in all but polar habitats. The wings, bodies, and legs of moths are covered with dustlike scales that come off if the insect is handled. Compared with butterflies, moths have stouter bodies and duller colouring. Moths also have distinctive feathery or thick antennae. When at rest, moths either fold their wings tentlike over the body, wrap them around the body, or hold them extended at their sides, whereas butterflies hold their wings vertically.
As with all lepidopterans, the moth life cycle has four stages: egg, larva (caterpillar), pupa (chrysalis), and adult (imago). The larvae and adults of most moth species are plant eaters. Larvae in particular do considerable damage to ornamental trees and shrubs and to many other plants of economic importance. The bollworm and measuring worm are two of the most destructive types of moth larvae. Some moth species (especially those of the family Tineidae, which includes the clothes moth) eat wool, fur, silk, and even feathers.
Some of the better-known moth families include: Gelechiidae, to which the destructive bollworms of cotton, corn, tomatoes, and other crops belong; Tortricidae, or leaf roller moths, which are forest pests; Lymantriidae, the tussock moths, also containing forest pests such as the gypsy moth; Arctiidae, the tiger moths, with many brightly coloured tropical species; Olethreutidae, including several destructive species such as the codling moth and the Oriental fruit moth; Noctuidae, the owlet moths, one of the largest families of lepidopterans; Saturniidae, the giant silkworm moths, containing the largest individual moths; and Geometridae, measuring worm moths, including the waves, pugs, and carpet moths.
Moths are a paraphyletic group of insects that includes all members of the order Lepidoptera that are not butterflies, with moths making up the vast majority of the order. There are thought to be approximately 160,000 species of moth, many of which have yet to be described. Most species of moth are nocturnal, but there are also crepuscular and diurnal species.
Differences between butterflies and moths
Although the rules for distinguishing moths from butterflies are not well established, one very good guiding principle is that butterflies have thin antennae and (with the exception of the family Hedylidae) have small balls or clubs at the end of their antennae. Moth antennae are usually feathery with no ball on the end. The divisions are named by this principle: "club-antennae" (Rhopalocera) or "varied-antennae" (Heterocera). Lepidoptera are set apart from other insects by the tube-like proboscis, a characteristic that may have evolved as early as the Middle Triassic and allows them to acquire nectar from flowering plants.
The modern English word moth comes from Old English moððe (cf. Northumbrian mohðe), from Common Germanic (compare Old Norse motti, Dutch mot, and German Motte, all meaning 'moth'). Its origins are possibly related to the Old English maða, meaning 'maggot', or to the root of midge, which until the 16th century was used mostly to indicate the larva, usually in reference to devouring clothes.
Moths evolved long before butterflies; moth fossils have been found that may be 190 million years old. Both types of Lepidoptera are thought to have co-evolved with flowering plants, mainly because most modern species, both as adults and larvae, feed on flowering plants. One of the earliest known species that is thought to be an ancestor of moths is Archaeolepis mane. Its fossil fragments show scaled wings that are similar to those of caddisflies in their veining.
Significance to humans
(Figure: an adult male pine processionary moth, Thaumetopoea pityocampa, a serious forest pest in its larval state. A bristle, the frenulum, springs from the underside of the hindwing and runs forward to be held in a small catch on the forewing; its function is to link the wings together.)
Some moths, particularly their caterpillars, can be major agricultural pests in many parts of the world. Examples include corn borers and bollworms. The caterpillar of the gypsy moth (Lymantria dispar) causes severe damage to forests in the northeastern United States, where it is an invasive species. In temperate climates, the codling moth causes extensive damage, especially to fruit farms. In tropical and subtropical climates, the diamondback moth (Plutella xylostella) is perhaps the most serious pest of brassicaceous crops. Also in sub-Saharan Africa, the African sugarcane borer is a major pest of sugarcane, maize, and sorghum.
While moths are notorious for eating clothing, most species do not, and some moth adults do not even eat at all. Some, like the Luna, Polyphemus, Atlas, Promethea, cecropia, and other large moths, do not have mouth parts. This is possible because they live off the food stores from when they were caterpillars, and only live a short time as adults (roughly a week for some species). Many species of adult moths do, however, eat: for instance, many will drink nectar.
Not all silk is produced by Bombyx mori. There are several species of Saturniidae that also are farmed for their silk, such as the ailanthus moth (Samia cynthia group of species), the Chinese oak silkmoth (Antheraea pernyi), the Assam silkmoth (Antheraea assamensis), and the Japanese silk moth (Antheraea yamamai).
The larvae of many species are used as food, particularly in Africa, where they are an important source of nutrition. The mopane worm, the caterpillar of Gonimbrasia belina, from the family Saturniidae, is a significant food resource in southern Africa. Another saturniid used as food is the cavorting emperor (Usta terpsichore). In one country alone, Congo, more than 30 species of moth larvae are harvested. Some are sold not only in the local village markets, but are shipped by the ton from one country to another.
Predators and parasites
Nocturnal insectivores often feed on moths; these include some bats, some species of owls and other species of birds. Moths also are eaten by some species of lizards, amphibians, cats, dogs, rodents, and some bears. Moth larvae are vulnerable to being parasitized by Ichneumonidae.
There is evidence that ultrasound in the range emitted by bats causes flying moths to make evasive maneuvers. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop a few centimeters or inches in its flight to evade attack, and tiger moths can emit clicks to foil bats' echolocation.
The fungus Ophiocordyceps sinensis infects the larvae of many different species of moths.
Ecological importance
Some studies indicate that certain species of moths, such as those belonging to the families Erebidae and Sphingidae, may be the key pollinators for some flowering plants in the Himalayan ecosystem. Recent studies have established that moths are important, but often overlooked, nocturnal pollinators of a wide range of plants.
Attraction to light
Studies have found that light pollution caused by increasing use of artificial lights has either led to a severe decline in moth population in some parts of the world or has severely disrupted nocturnal pollination.
Moth Interesting Facts
What type of animals are moths?
Moths fall in a paraphyletic group of insects that includes all members of the order Lepidoptera except the butterflies.
What class of animals do moths belong to?
Moths are a part of the Insecta class, which is the largest group within the Arthropod phylum.
How many moths are there in the world?
There are almost 160,000 moth species worldwide, of which more than 11,000 are found in the United States alone; moths are roughly ten times as numerous as butterflies.
Where do moths live?
Moths can be found in almost every major part of the world like Africa, Asia, Central America, Eurasia, Europe, Oceania, North America, and South America.
What is a moth's habitat?
Moths can easily be found in quiet, dark forests and pasture lands; they can also be found in tropical settings and grasslands. A major part of moth species is attracted to light sources, which confuse them and make them lose their sense of direction. They usually prefer warm places. During winters, they tend to migrate south towards warmer temperatures, and they are known for flying long distances during the migration period. They are adaptive in nature. Most of them are nocturnal. Some moth species look just like butterflies.
Who do moths live with?
As moths have many species and varieties, some moths like to stay alone or move in pairs like butterflies, while others move in large family-like groups, called eclipses. The lifestyle and life cycle of moths vary from species to species.
How long do moths live?
The lifespan varies between moth species. The average lifespan of a moth is around 40 days. Some moths can live for a year, while others live only for days, weeks, or a month. Some, like the hummingbird moth and the luna moth, can live up to three months. Different moth species have evolved differently.
How do they reproduce?
Moths have evolved with time. When female moths are ready for mating, they release some chemicals to attract the male moths. The male moth then follows this smell and gets attracted to the female.
After mating, the female moths lay their eggs on plants. On average, 100 eggs are laid. In most cases, the eggs hatch within ten days and the caterpillar stage begins. This is when they begin to eat plants as food. To prepare for the pupal stage, caterpillars need to eat almost 2,700 times their body weight. After roughly three weeks to a month, they spin cocoons around themselves and enter the pupal stage, from which they finally emerge as adult moths.
What is their conservation status?
Currently, some moth species are considered Endangered, like the garden tiger and white ermine moth, as they have lost their habitats because of humans. Most species are of Least Concern.
#1231 2021-12-23 00:11:46
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1207) Glowworm
Glowworm or glow-worm is the common name for various groups of insect larvae and adult larviform females that glow through bioluminescence. They include the European common glow-worm and other members of the Lampyridae, but bioluminescence also occurs in the families Elateridae, Phengodidae, and Rhagophthalmidae among beetles; as well as members of the genera Arachnocampa, Keroplatus, and Orfelia among keroplatid fungus gnats.
Four families of beetles are bioluminescent. The wingless larviform females and larvae of these bioluminescent species are usually known as "glowworms". Winged males may or may not also exhibit bioluminescence. Their light may be emitted as flashes or as a constant glow, and usually ranges in color from green and yellow to orange. The families are closely related, and are all members of the beetle superfamily Elateroidea. Phylogenetic analyses have indicated that bioluminescence may have a single evolutionary origin among the families Lampyridae, Phengodidae, and Rhagophthalmidae, but is likely to have arisen independently among Elateridae.
* Family Elateridae – The click beetles. Of the estimated 10,000 species classified under this family, around 200 species from tropical regions of the Americas and some Melanesian islands are bioluminescent. All of them are members of the subfamily Pyrophorinae, except for one species, Campyloxenus pyrothorax, which belongs to subfamily Campyloxeninae, and Balgus schnusei, in Thylacosterninae.
* Family Lampyridae – True fireflies. Contains around 2,000 species found throughout the world. Some "glow worms" are in this family.
* Family Phengodidae – Usually known as glowworm beetles. Contains around 230 species endemic to the New World. This family also includes railroad worms, which are unique among all terrestrial bioluminescent organisms in producing red light.
* Family Rhagophthalmidae – Contains around 30 species found in Asia. The validity of this family has not been fully resolved. Rhagophthalmidae was formerly considered to be a subfamily under Phengodidae before being treated as a distinct family. Some authors now believe that it should be classified under Lampyridae.
Fungus gnats
Three genera of fungus gnats are bioluminescent, and known as "glowworms" in their larval stage. They produce a blue-green light. The larvae spin sticky webs to catch food. They are found in caves, overhangs, rock cavities, and other sheltered, wet areas. They are usually classified under the family Keroplatidae, but this is not universally accepted and some authors place them under Mycetophilidae instead. Despite the similarities in function and appearance, the bioluminescent systems of the three genera are not homologous and are believed to have evolved separately.
* Genus Arachnocampa – found in Australia and New Zealand, including the well-known New Zealand glowworm (Arachnocampa luminosa); the larvae hang sticky silk threads from cave ceilings and overhangs to snare flying insects attracted by their light.
* Genus Orfelia – sometimes known as "dismalites". Contains a single species, Orfelia fultoni, found only in North America. Like Arachnocampa spp., their larvae may use their lights to attract prey like springtails and other small insects, but their main food is fungal spores.
* Genus Keroplatus – found in Eurasia. Unlike Arachnocampa and Orfelia, the larvae of Keroplatus feed only on fungal spores. Their bioluminescence is believed to have no function and is vestigial.
#1232 2021-12-24 00:02:47
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1208) Invertebrate
Invertebrates serve as food for humans; are key elements in food chains that support birds, fish, and many other vertebrate species; and play important roles in plant pollination. Despite providing important environmental services, invertebrates are often ancillary in wildlife research and conservation, with priority given instead to studies that focus on large vertebrates. In addition, several invertebrate groups (including many types of insects and worms) are viewed solely as pests, and by the early 21st century the heavy use of pesticides worldwide had caused substantial population declines among bees, wasps, and other terrestrial insects.
Invertebrates are animals that neither possess nor develop a vertebral column (commonly known as a backbone or spine), derived from the notochord. This includes all animals apart from the chordate subphylum Vertebrata. Familiar examples of invertebrates include arthropods (insects, arachnids, crustaceans, and myriapods), mollusks (chitons, snails, bivalves, squids, and octopuses), annelids (earthworms and leeches), and cnidarians (hydras, jellyfishes, sea anemones, and corals).
The majority of animal species are invertebrates; one estimate puts the figure at 97%. Many invertebrate taxa have a greater number and variety of species than the entire subphylum of Vertebrata. Invertebrates vary widely in size, from 50 μm (0.002 in) rotifers to the 9–10 m (30–33 ft) colossal squid.
Some so-called invertebrates, such as the Tunicata and Cephalochordata, are more closely related to vertebrates than to other invertebrates. This makes the invertebrates paraphyletic, so the term has little meaning in taxonomy.
Taxonomic significance
The term invertebrates is not always precise among non-biologists since it does not accurately describe a taxon in the same way that Arthropoda, Vertebrata or Manidae do. Each of these terms describes a valid taxon, phylum, subphylum or family. "Invertebrata" is a term of convenience, not a taxon; it has very little circumscriptional significance except within the Chordata. The Vertebrata as a subphylum comprises such a small proportion of the Metazoa that to speak of the kingdom Animalia in terms of "Vertebrata" and "Invertebrata" has limited practicality. In a more formal taxonomy of Animalia, other attributes should logically precede the presence or absence of the vertebral column in constructing a cladogram, for example the presence of a notochord. That would at least circumscribe the Chordata. However, even the notochord would be a less fundamental criterion than aspects of embryological development and symmetry, or perhaps bauplan.
Despite this, the concept of invertebrates as a taxon of animals has persisted for over a century among the laity, and within the zoological community and in its literature it remains in use as a term of convenience for animals that are not members of the Vertebrata. The following text reflects earlier scientific understanding of the term and of those animals which have constituted it. According to this understanding, invertebrates do not possess a skeleton of bone, either internal or external. They include hugely varied body plans. Many have fluid-filled, hydrostatic skeletons, like jellyfish or worms. Others have hard exoskeletons, outer shells like those of insects and crustaceans. The most familiar invertebrates include the Protozoa, Porifera, Coelenterata, Platyhelminthes, Nematoda, Annelida, Echinodermata, Mollusca and Arthropoda. Arthropoda include insects, crustaceans and arachnids.
The trait that is common to all invertebrates is the absence of a vertebral column (backbone): this creates a distinction between invertebrates and vertebrates. The distinction is one of convenience only; it is not based on any clear biologically homologous trait, any more than the common trait of having wings functionally unites insects, bats, and birds, or than not having wings unites tortoises, snails and sponges. Being animals, invertebrates are heterotrophs, and require sustenance in the form of the consumption of other organisms. With a few exceptions, such as the Porifera, invertebrates generally have bodies composed of differentiated tissues. There is also typically a digestive chamber with one or two openings to the exterior.
Morphology and symmetry
The body plans of most multicellular organisms exhibit some form of symmetry, whether radial, bilateral, or spherical. A minority, however, exhibit no symmetry. One example of asymmetric invertebrates is the gastropods: this is easily seen in snails and sea snails, which have helical shells. Slugs appear externally symmetrical, but their pneumostome (breathing hole) is located on the right side. Other gastropods develop external asymmetry, such as Glaucus atlanticus, which develops asymmetrical cerata as it matures. The origin of gastropod asymmetry is a subject of scientific debate.
Other examples of asymmetry are found in fiddler crabs and hermit crabs, which often have one claw much larger than the other. If a male fiddler loses its large claw, it will grow another on the opposite side after moulting. Further asymmetric groups include sessile animals such as sponges; coral colonies (with the exception of the individual polyps, which exhibit radial symmetry); alpheid shrimp claws that lack pincers; and some copepods, polyopisthocotyleans, and monogeneans that parasitize their fish hosts by attaching to, or residing within, the gill chamber.
Nervous system
Invertebrate neurons differ from mammalian cells. Invertebrate cells fire in response to similar stimuli as mammalian cells do, such as tissue trauma, high temperature, or changes in pH. The first invertebrate in which a nociceptive cell was identified was the medicinal leech, Hirudo medicinalis.
Learning and memory using nociceptors have been described in the sea hare, Aplysia. Mollusk neurons are able to detect increasing pressures and tissue trauma.
Nociceptive neurons have been identified in a wide range of invertebrate species, including annelids, molluscs, nematodes and arthropods.
Reproduction
Like vertebrates, most invertebrates reproduce at least partly through sexual reproduction. They produce specialized reproductive cells that undergo meiosis to produce smaller, motile spermatozoa or larger, non-motile ova. These fuse to form zygotes, which develop into new individuals. Others are capable of asexual reproduction, or sometimes, both methods of reproduction.
Social interaction
Social behavior is widespread in invertebrates, including cockroaches, termites, aphids, thrips, ants, bees, Passalidae, Acari, spiders, and more. Social interaction is particularly salient in eusocial species but applies to other invertebrates as well.
Insects recognize information transmitted by other insects.
The term invertebrates covers several phyla. One of these are the sponges (Porifera). They were long thought to have diverged from other animals early. They lack the complex organization found in most other phyla. Their cells are differentiated, but in most cases not organized into distinct tissues. Sponges typically feed by drawing in water through pores. Some speculate that sponges are not so primitive, but may instead be secondarily simplified. The Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the rear. Both have distinct tissues, but they are not organized into organs. There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, they are sometimes called diploblastic.
The Echinodermata are radially symmetric and exclusively marine, including starfish (Asteroidea), sea urchins, (Echinoidea), brittle stars (Ophiuroidea), sea cucumbers (Holothuroidea) and feather stars (Crinoidea).
The largest animal phylum is also included within invertebrates: the Arthropoda, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. In addition, they possess a hardened exoskeleton that is periodically shed during growth. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The Nematoda, or roundworms, are perhaps the second largest animal phylum, and are also invertebrates. Roundworms are typically microscopic, and occur in nearly every environment where there is water. A number are important parasites. Smaller phyla related to them are the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Other invertebrates include the Nemertea or ribbon worms, and the Sipuncula.
Another phylum is Platyhelminthes, the flatworms. These were originally considered primitive, but it now appears they developed from more complex ancestors. Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha. The Rotifera, or rotifers, are common in aqueous environments. Invertebrates also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and the Cycliophora.
Among lesser phyla of invertebrates are the Hemichordata, or acorn worms, and the Chaetognatha, or arrow worms. Other phyla include Acoelomorpha, Brachiopoda, Bryozoa, Entoprocta, Phoronida, and Xenoturbellida.
The earliest animal fossils appear to be those of invertebrates. 665-million-year-old fossils in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia have been interpreted as being early sponges. Some paleontologists suggest that animals appeared much earlier, possibly as early as 1 billion years ago, though they probably became multicellular in the Tonian. Trace fossils such as tracks and burrows found in the late Neoproterozoic era indicate the presence of triploblastic worm-like metazoans, roughly as large (about 5 mm wide) and complex as earthworms.
Around 453 MYA, animals began diversifying, and many of the important groups of invertebrates diverged from one another. Fossils of invertebrates are found in various types of sediment from the Phanerozoic. Fossils of invertebrates are commonly used in stratigraphy.
Carl Linnaeus divided these animals into only two groups, the Insecta and the now-obsolete Vermes (worms). Jean-Baptiste Lamarck, who was appointed to the position of "Curator of Insecta and Vermes" at the Muséum National d'Histoire Naturelle in 1793, both coined the term "invertebrate" to describe such animals and divided the original two groups into ten, by splitting Arachnida and Crustacea from the Linnean Insecta, and Mollusca, Annelida, Cirripedia, Radiata, Coelenterata and Infusoria from the Linnean Vermes. They are now classified into over 30 phyla, from simple organisms such as sea sponges and flatworms to complex animals such as arthropods and molluscs.
Significance of the group
Invertebrates are animals without a vertebral column. This has led to the conclusion that invertebrates are a group that deviates from the norm, the vertebrates. This has been said to be because researchers in the past, such as Lamarck, viewed vertebrates as a "standard": in Lamarck's theory of evolution, he believed that characteristics acquired through the evolutionary process involved not only survival, but also progression toward a "higher form", to which humans and vertebrates were closer than invertebrates were. Although goal-directed evolution has been abandoned, the distinction of invertebrates and vertebrates persists to this day, even though the grouping has been noted to be "hardly natural or even very sharp." Another reason cited for this continued distinction is that Lamarck created a precedent through his classifications which is now difficult to escape from. It is also possible that some humans, being vertebrates themselves, believe that the group deserves more attention than invertebrates. In any event, the 1968 edition of 'Invertebrate Zoology' notes that "division of the Animal Kingdom into vertebrates and invertebrates is artificial and reflects human bias in favor of man's own relatives." The book also points out that the group lumps a vast number of species together, so that no one characteristic describes all invertebrates. In addition, some species included are only remotely related to one another, with some more related to vertebrates than other invertebrates.
In research
For many centuries, invertebrates were neglected by biologists, in favor of big vertebrates and "useful" or charismatic species. Invertebrate biology was not a major field of study until the work of Linnaeus and Lamarck in the 18th century. During the 20th century, invertebrate zoology became one of the major fields of natural sciences, with prominent discoveries in the fields of medicine, genetics, palaeontology, and ecology. The study of invertebrates has also benefited law enforcement, as arthropods, and especially insects, were discovered to be a source of information for forensic investigators.
Two of the most commonly studied model organisms nowadays are invertebrates: the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans. They have long been the most intensively studied model organisms, and were among the first life-forms to be genetically sequenced. This was facilitated by the severely reduced state of their genomes, but many genes, introns, and linkages have been lost. Analysis of the starlet sea anemone genome has emphasised the importance of sponges, placozoans, and choanoflagellates, also being sequenced, in explaining the arrival of 1500 ancestral genes unique to animals. Invertebrates are also used by scientists in the field of aquatic biomonitoring to evaluate the effects of water pollution and climate change.
#1233 2021-12-25 00:19:36
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1209) Tissue
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Tissue, in physiology, is a level of organization in multicellular organisms; it consists of a group of structurally and functionally similar cells and their intercellular material.
By definition, tissues are absent from unicellular organisms. Even among the simplest multicellular species, such as sponges, tissues are lacking or are poorly differentiated. But multicellular animals and plants that are more advanced have specialized tissues that can organize and regulate an organism’s response to its environment.
Bryophytes (liverworts, hornworts, and mosses) are nonvascular plants; i.e., they lack vascular tissues (phloem and xylem) as well as true leaves, stems, and roots. Instead bryophytes absorb water and nutrients directly through leaflike and stemlike structures or through cells comprising the gametophyte body.
In vascular plants, such as angiosperms and gymnosperms, cell division takes place almost exclusively in specific tissues known as meristems. Apical meristems, which are located at the tips of shoots and roots in all vascular plants, give rise to three types of primary meristems, which in turn produce the mature primary tissues of the plant. The three kinds of mature tissues are dermal, vascular, and ground tissues. Primary dermal tissues, called epidermis, make up the outer layer of all plant organs (e.g., stems, roots, leaves, flowers). They help deter excess water loss and invasion by insects and microorganisms. The vascular tissues are of two kinds: water-transporting xylem and food-transporting phloem. Primary xylem and phloem are arranged in vascular bundles that run the length of the plant from roots to leaves. The ground tissues, which comprise the remaining plant matter, include various support, storage, and photosynthetic tissues.
Early in the evolutionary history of animals, tissues became aggregated into organs, which themselves became divided into specialized parts. An early scientific classification of tissues divided them on the basis of the organ system of which they formed a part (e.g., nervous tissues). Embryologists have often classified tissues on the basis of their origin in the developing embryo; i.e., ectodermal, endodermal, and mesodermal tissues. Another method classified tissues into four broad groups according to cell composition: epithelial tissues, composed of cells that make up the body’s outer covering and the membranous covering of internal organs, cavities, and canals; endothelial tissues, composed of cells that line the inside of organs; stroma tissues, composed of cells that serve as a matrix in which the other cells are embedded; and connective tissues, a rather amorphous category composed of cells and an extracellular matrix that serve as a connection from one tissue to another.
The most useful of all systems, however, breaks down animal tissues into four classes based on the functions that the tissues perform. The first class includes all those tissues that serve an animal’s needs for growth, repair, and energy; i.e., the assimilation, storage, transport, and excretion of nutrients and waste products. In humans, these tissues include the alimentary (or digestive) tract, kidneys, liver, and lungs. The digestive tract leads (in vertebrates) from the mouth through the pharynx, stomach, and intestines to the anus. In vertebrates and some larger invertebrates, oxygen and the nutrients secured by the alimentary tissues or liberated from storage tissues are transported throughout the body by the blood and lymph, which are themselves considered by many to be tissues. Tissues that secure oxygen and excrete carbon dioxide are extremely variable in the animal kingdom. In many invertebrates, gas exchange takes place through the body wall or external gills, but in species adapted to a terrestrial life, an internal sac capable of expansion and contraction served this purpose, gradually becoming more complex over evolutionary time as animals’ demand for oxygen increased.
The second class of tissues consists of those used in coordination. There are basically two types: physical (nervous and sensory tissues), which operate via electrical impulses along nerve fibres; and the chemical (endocrine tissues), which release hormones into the bloodstream. In invertebrates, both physical and chemical coordination are performed by the same tissues, because the nervous tissues also serve as hormone sources. In vertebrates, most endocrine functions are isolated in specialized glands, several of which are derived from nervous tissue.
The basic unit of all nervous tissue is the neuron, aggregations of which are called ganglia. The bundles of axons along which neurons transmit and receive impulses are called nerves. By comparison, chemical control by hormones is much slower and longer-acting. In many invertebrates, chemical stimulators are secreted by the neurons themselves and then move to their site of action along the axon. In higher vertebrates, the principal endocrine tissues are the thyroid, parathyroid, pituitary, and endocrine constituents of the pancreas and adrenal glands.
The third class of tissues includes those contributing to the body’s support and movement. The connective tissues proper surround organs, bones, and muscles, helping to hold them together. Connective tissues proper consist of cells embedded in a matrix composed of an amorphous ground substance and collagen, elastic, and reticular fibres. Tendons and ligaments are examples of extremely strong connective tissues proper. The other major structural tissues are cartilage and bone, which, like connective tissues proper, consist of cells embedded in an intercellular matrix. In cartilage the matrix is firm but rubbery; in bone the matrix is rigid, being impregnated by hard crystals of inorganic salts. Muscle tissue is primarily responsible for movement; it consists of contractile cells. There are two general types of muscle: striated muscle, which moves the skeleton and is under voluntary control; and smooth muscle, which surrounds the walls of many internal organs and cannot normally be controlled voluntarily.
A fourth class of tissues includes reproductive tissues, hemopoietic tissues, and tissue fluids. The most important reproductive tissues are the gonads (ovaries and testes), which produce the gametes (eggs and sperm, respectively). Hemopoietic tissues produce the cellular components of the blood. Among the important tissue fluids are lymph, cerebrospinal fluid, and milk (in mammals).
#1234 2021-12-26 01:07:41
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1210) Peritoneum
The omentum and mesentery contain blood vessels, nerves, lymph nodes, fat, elastic fibres for stretching, and collagen fibres for strength. The omentum is thinner than the mesentery and is lacy in appearance. It contains large quantities of fat that serve to keep the organs warm. The mesentery is fan-shaped and well-supplied with blood vessels that radiate to the intestine.
The functions of these membranes are to prevent friction between closely packed organs by secreting serum that acts as a lubricant, to help hold the abdominal organs in their proper positions, to separate and unite organs, and to guard as a barrier against infection.
Peritonitis, an inflammation of the peritoneum, results from bacteria entering a perforation in the gastrointestinal tract. A ruptured appendix is a common cause of peritonitis. Symptoms include abdominal pain, vomiting, and fever. If antibiotics do not prove successful, surgery may be necessary to remove the source of the infection entirely.
The peritoneum is the serous membrane forming the lining of the abdominal cavity or coelom in amniotes and some invertebrates, such as annelids. It covers most of the intra-abdominal (or coelomic) organs, and is composed of a layer of mesothelium supported by a thin layer of connective tissue. This peritoneal lining of the cavity supports many of the abdominal organs and serves as a conduit for their blood vessels, lymphatic vessels, and nerves.
The abdominal cavity (the space bounded by the vertebrae, abdominal muscles, diaphragm, and pelvic floor) is different from the intraperitoneal space (located within the abdominal cavity but wrapped in peritoneum). The structures within the intraperitoneal space are called "intraperitoneal" (e.g., the stomach and intestines), the structures in the abdominal cavity that are located behind the intraperitoneal space are called "retroperitoneal" (e.g., the kidneys), and those structures below the intraperitoneal space are called "subperitoneal" or "infraperitoneal" (e.g., the bladder).
The peritoneum is one continuous sheet, forming two layers and a potential space between them: the peritoneal cavity.
The outer layer, the parietal peritoneum, is attached to the abdominal wall and the pelvic walls. The tunica vaginalis, the serous membrane covering the male testis, is derived from the vaginal process, an outpouching of the parietal peritoneum.
The inner layer, the visceral peritoneum, is wrapped around the visceral organs, located inside the intraperitoneal space for protection. It is thinner than the parietal peritoneum. The mesentery is a double layer of visceral peritoneum that attaches to the gastrointestinal tract. There are often blood vessels, nerves, and other structures between these layers. The space between these two layers is technically outside of the peritoneal sac, and thus not in the peritoneal cavity.
The potential space between these two layers is the peritoneal cavity, filled with a small amount (about 50 mL) of slippery serous fluid that allows the two layers to slide freely over each other.
Peritoneal folds are omenta, mesenteries and ligaments; they connect organs to each other or to the abdominal wall. There are two main regions of the peritoneal cavity, connected by the omental foramen.
* The greater sac.
* The lesser sac. The lesser sac is divided into two "omenta":
* (i) The lesser omentum (or gastrohepatic) is attached to the lesser curvature of the stomach and the liver.
* (ii) The greater omentum (or gastrocolic) hangs from the greater curve of the stomach and loops down in front of the intestines before curving back upwards to attach to the transverse colon.
In effect it is draped in front of the intestines like an apron and may serve as an insulating or protective layer.
Peritoneal folds develop from the ventral and dorsal mesentery of the embryo.
Clinical significance:
Peritoneal dialysis
In one form of dialysis, called peritoneal dialysis, a glucose solution is sent through a tube into the peritoneal cavity. The fluid is left there for a prescribed amount of time to absorb waste products, and then removed through the tube. The reason for this effect is the high number of arteries and veins in the peritoneal cavity. Through the mechanism of diffusion, waste products are removed from the blood.
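The exchange during a dwell can be caricatured as a first-order approach to equilibrium: solute diffuses from the blood into the dialysate until the two concentrations meet. The sketch below is purely illustrative; the rate constant and concentrations are invented numbers, not clinical values:

```python
import math

# Toy first-order model of solute transfer during a peritoneal dialysis
# dwell. All numbers are invented for illustration only.
c_blood = 1.0      # solute concentration in blood (arbitrary units)
c_dialysate = 0.0  # fresh dialysate starts solute-free
k = 0.5            # assumed transfer rate constant, per hour

for t in (1, 2, 4, 8):  # dwell time in hours
    c_t = c_blood - (c_blood - c_dialysate) * math.exp(-k * t)
    print(f"after {t} h: dialysate at {c_t:.2f} of blood concentration")
```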
Peritonitis is the inflammation of the peritoneum. It is most commonly associated with infection from a punctured organ of the abdominal cavity. It can also be provoked by the presence of fluids that produce chemical irritation, such as gastric acid or pancreatic juice. Peritonitis causes fever, tenderness, and pain in the abdominal area, which can be localized or diffuse. The treatment involves rehydration, administration of antibiotics, and surgical correction of the underlying cause. Mortality is higher in the elderly and when the condition has been present for a prolonged time.
Primary peritoneal carcinoma
Primary peritoneal cancer is a cancer of the cells lining the peritoneum.
#1235 2021-12-27 00:03:59
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1211) Tympanic membrane
Tympanic membrane, also called eardrum, is a thin layer of tissue in the human ear that receives sound vibrations from the outer air and transmits them to the auditory ossicles, which are tiny bones in the tympanic (middle-ear) cavity. It also serves as the lateral wall of the tympanic cavity, separating it from the external auditory canal. The membrane lies across the end of the external canal and looks like a flattened cone with its tip (apex) pointed inward. The edges are attached to a ring of bone, the tympanic annulus.
The drum membrane has three layers: the outer layer, continuous with the skin on the external canal; the inner layer, continuous with the mucous membrane lining the middle ear; and, between the two, a layer of radial and circular fibres that give the membrane its tension and stiffness. The membrane is well supplied with blood vessels, and its sensory nerve fibres make it extremely sensitive to pain.
Orientation and relations
The tympanic membrane is oriented obliquely in the anteroposterior, mediolateral, and superoinferior planes. Consequently, its superoposterior end lies lateral to its anteroinferior end.
Anatomically, it relates superiorly to the middle cranial fossa, posteriorly to the ossicles and facial nerve, inferiorly to the parotid gland, and anteriorly to the temporomandibular joint.
The eardrum is divided into two general regions: the pars flaccida and the pars tensa. The relatively fragile pars flaccida lies above the lateral process of the malleus between the notch of Rivinus and the anterior and posterior malleal folds. Consisting of two layers and appearing slightly pinkish in hue, it is associated with Eustachian tube dysfunction and cholesteatomas.
The larger pars tensa consists of three layers: skin, fibrous tissue, and mucosa. Its thick periphery forms a fibrocartilaginous ring called the annulus tympanicus or Gerlach's ligament, while the central umbo tents inward at the level of the tip of the malleus. The middle fibrous layer, containing radial, circular, and parabolic fibers, encloses the handle of the malleus. Though comparatively robust, the pars tensa is the region more commonly associated with perforations.
The manubrium (Latin: handle) of the malleus is firmly attached to the medial surface of the membrane as far as its center, drawing it toward the tympanic cavity. The lateral surface of the membrane is thus concave. The most depressed aspect of this concavity is termed the umbo (Latin: shield boss).
Nerve supply
Sensation of the outer surface of the tympanic membrane is supplied mainly by the auriculotemporal nerve, a branch of the mandibular nerve (cranial nerve V3), with contributions from the auricular branch of the vagus nerve (cranial nerve X), the facial nerve (cranial nerve VII), and possibly the glossopharyngeal nerve (cranial nerve IX). The inner surface of the tympanic membrane is innervated by the glossopharyngeal nerve.
Clinical significance:
When the eardrum is illuminated during a medical examination, a cone of light radiates from the tip of the malleus to the periphery in the anteroinferior quadrant; this is what is known clinically as 5 o'clock.
Unintentional perforation (rupture) has been described in blast injuries and air travel, typically in patients experiencing upper respiratory congestion that prevents equalization of pressure in the middle ear. It is also known to occur in swimming, diving (including scuba diving), and martial arts.
Patients suffering from tympanic membrane rupture may experience bleeding, tinnitus, hearing loss, or disequilibrium (vertigo). However, they rarely require medical intervention, as between 80 and 95 percent of ruptures recover completely within two to four weeks. The prognosis becomes more guarded as the force of injury increases.
Surgical puncture for treatment of middle ear infections
In some cases, the pressure of fluid in an infected middle ear is great enough to cause the eardrum to rupture naturally. Usually, this consists of a small hole (perforation), from which fluid can drain.
#1236 2021-12-28 00:02:34
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1212) Geothermal energy
Geothermal energy is the thermal energy in the Earth's crust, which originates from the formation of the planet and from radioactive decay of materials, in currently uncertain but possibly roughly equal proportions. The high temperature and pressure in Earth's interior cause some rock to melt and the solid mantle to behave plastically, resulting in parts of the mantle convecting upward, since it is lighter than the surrounding rock. Temperatures at the core–mantle boundary can reach over 4000 °C (7200 °F).
Geothermal heating, for example using water from hot springs, has been used for bathing since Paleolithic times and for space heating since ancient Roman times. More recently, geothermal power, the term used for generation of electricity from geothermal energy, has gained in importance. It is estimated that the earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, although only a very small fraction is currently being profitably exploited, often in areas near tectonic plate boundaries.
As a result of government-assisted research and industry experience, the cost of generating geothermal power decreased by 25% over the 1980s and 1990s. More recent technological advances have dramatically reduced costs and thereby expanded the range and size of viable resources; in 2021 the U.S. Department of Energy estimated that geothermal energy from a power plant "built today" costs about $0.05/kWh.
Worldwide, 13,900 megawatts (MW) of geothermal power was available in 2019. An additional 28 gigawatts of direct geothermal heating capacity is installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications as of 2010.
Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, plate boundary movement and interest rates. Pilot programs like EWEB's customer opt-in Green Power Program show that customers would be willing to pay a little more for a renewable energy source like geothermal. About 100 thousand people are employed in the industry. The adjective geothermal originates from the Greek roots γῆ (gê), meaning Earth, and θερμός (thermós), meaning hot.
Heat from Earth’s interior generates surface phenomena such as lava flows, geysers, fumaroles, hot springs, and mud pots. The heat is produced mainly by the radioactive decay of potassium, thorium, and uranium in Earth’s crust and mantle and also by friction generated along the margins of continental plates. The subsequent annual low-grade heat flow to the surface averages between 50 and 70 milliwatts (mW) per square metre worldwide. In contrast, incoming solar radiation striking Earth’s surface provides 342 watts per square metre annually. Geothermal heat energy can be recovered and exploited for human use, and it is available anywhere on Earth’s surface. The estimated energy that can be recovered and utilized on the surface is 4.5 × 10^6 exajoules, or about 1.4 × 10^6 terawatt-years, which equates to roughly three times the world’s annual consumption of all types of energy.
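The two surface fluxes quoted above differ by several orders of magnitude, which a one-line comparison makes explicit (using the midpoint of the 50–70 mW range):

```python
# Average heat fluxes at Earth's surface, per square metre, from the text.
geothermal_flux_w = 0.060  # midpoint of the 50-70 mW/m^2 range
solar_flux_w = 342         # average incoming solar radiation, W/m^2

ratio = solar_flux_w / geothermal_flux_w
print(f"Solar input is roughly {ratio:,.0f}x the geothermal heat flow")  # ~5,700x
```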
The amount of usable energy from geothermal sources varies with depth and by extraction method. The increase in temperature of rocks and other materials underground averages 20–30 °C (36–54 °F) per kilometre (0.6 mile) depth worldwide in the upper part of the lithosphere, and this rate of increase is much higher in most of Earth’s known geothermal areas. Normally, heat extraction requires a fluid (or steam) to bring the energy to the surface. Locating and developing geothermal resources can be challenging. This is especially true for the high-temperature resources needed for generating electricity. Such resources are typically limited to parts of the world characterized by recent volcanic activity or located along plate boundaries or within crustal hot spots. Even though there is a continuous source of heat within Earth, the extraction rate of the heated fluids and steam can exceed the replenishment rate, and, thus, use of the resource must be managed sustainably.
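The average gradient gives a quick way to estimate how deep one must drill to reach a given temperature. A rough sketch; the surface temperature and target are assumptions chosen for the example, and real geothermal fields can have far steeper gradients:

```python
# Depth needed to reach a target temperature at the average gradient.
surface_temp_c = 15      # assumed mean surface temperature, deg C
gradient_c_per_km = 25   # midpoint of the 20-30 deg C per km range
target_temp_c = 150      # near the low end for electricity generation

depth_km = (target_temp_c - surface_temp_c) / gradient_c_per_km
print(f"~{depth_km:.1f} km of drilling to reach {target_temp_c} deg C")  # ~5.4 km
```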
Direct uses
Probably the most widely used set of applications involves the direct use of heated water from the ground without the need for any specialized equipment. All direct-use applications make use of low-temperature geothermal resources, which range between about 50 and 150 °C (122 and 302 °F). Such low-temperature geothermal water and steam have been used to warm single buildings, as well as whole districts where numerous buildings are heated from a central supply source. In addition, many swimming pools, balneological (therapeutic) facilities at spas, greenhouses, and aquaculture ponds around the world have been heated with geothermal resources. Other direct uses of geothermal energy include cooking, industrial applications (such as drying fruit, vegetables, and timber), milk pasteurization, and large-scale snow melting. For many of those activities, hot water is often used directly in the heating system, or it may be used in conjunction with a heat exchanger, which transfers heat when there are problematic minerals and gases such as hydrogen sulfide mixed in with the fluid.
Geothermal heat pumps
Geothermal heat pumps (GHPs) take advantage of the relatively stable moderate temperature conditions that occur within the first 300 metres (1,000 feet) of the surface to heat buildings in the winter and cool them in the summer. In that part of the lithosphere, rocks and groundwater occur at temperatures between 5 and 30 °C (41 and 86 °F). At shallower depths, where most GHPs are found, such as within 6 metres (about 20 feet) of Earth’s surface, the ground maintains a near-constant temperature of 10 to 16 °C (50 to 60 °F). Consequently, that heat can be used to help warm buildings during the colder months of the year when the air temperature falls below that of the ground. Similarly, during the warmer months of the year, warm air can be drawn from a building and circulated underground, where it loses much of its heat and is returned.
A GHP system is made up of a heat exchanger (a loop of pipes buried in the ground) and a pump. The heat exchanger transfers heat energy between the ground and air at the surface by means of a fluid that circulates through the pipes; the fluid used is often water or a combination of water and antifreeze. During warmer months, heat from warm air is transferred to the heat exchanger and into the fluid. As it moves through the pipes, the heat is dispersed to the rocks, soil, and groundwater. The pump is reversed during the colder months. Heat energy stored in the relatively warm ground raises the temperature of the fluid. The fluid then transfers this energy to the heat pump, which warms the air inside the building.
GHPs have several advantages over more conventional heating and air-conditioning systems. They are very efficient, using 25–50 percent less electricity than comparable conventional heating and cooling systems, and they produce less pollution. The reduction in energy use associated with GHPs can translate into as much as a 44 percent decrease in greenhouse gas emissions compared with air-source heat pumps (which transfer heat between indoor and outdoor air). In addition, when compared with electric resistance heating systems (which convert electricity to heat) coupled with standard air-conditioning systems, GHPs can produce up to 72 percent less greenhouse gas emissions.
Electric power generation
Depending upon the temperature and the fluid (steam) flow, geothermal energy can be used to generate electricity. Geothermal power plants can produce electricity in three ways. Despite their differences in design, all three control the behaviour of steam and use it to drive electrical generators. Given that the excess water vapour at the end of each process is condensed and returned to the ground, where it is reheated for later use, geothermal power is considered a form of renewable energy.
Some geothermal power plants simply collect rising steam from the ground. In such “dry steam” operations, the heated water vapour is funneled directly into a turbine that drives an electrical generator. Other power plants, built around the flash steam and binary cycle designs, use a mixture of steam and heated water (“wet steam”) extracted from the ground to start the electrical generation process.
In flash steam power plants, pressurized high-temperature water is drawn from beneath the surface into containers at the surface, called flash tanks, where the sudden decrease in pressure causes the liquid water to “flash,” or vaporize, into steam. The steam is then used to power the turbine-generator set. In contrast, binary-cycle power plants use steam driven off a secondary working fluid (such as ammonia and hydrocarbons) contained within a closed loop of pipes to power the turbine-generator set. In this process, geothermally heated water is drawn up through a different set of pipes, and much of the energy stored in the heated water is transferred to the working fluid through a heat exchanger. The working fluid then vaporizes. After the vapour from the working fluid passes through the turbine, it is recondensed and piped back to the heat exchanger.
Electrical power usually requires water heated above 175 °C (347 °F) to be economical. In geothermal plants using the Organic Rankine Cycle (ORC), a special type of binary-cycle technology that utilizes lower-temperature heat sources (such as biomass combustion and industrial waste heat), water temperatures as low as 85–90 °C (185–194 °F) may be used.
Geothermal energy from natural pools and hot springs has long been used for cooking, bathing, and warmth. There is evidence that Native Americans used geothermal energy for cooking as early as 10,000 years ago. In ancient times, baths heated by hot springs were used by the Greeks and Romans, and examples of geothermal space heating date at least as far back as the Roman city of Pompeii during the 1st century CE. Such uses of geothermal energy were initially limited to sites where hot water and steam were accessible.
Although the world’s first district heating system was installed at Chaudes-Aigues, France, in the 14th century, it was not until the late 19th century that other cities, as well as industries, began to realize the economic potential of geothermal resources. Geothermal heat was delivered to the first residences in the United States in 1892, to Warm Springs Avenue in Boise, Idaho, and most of the city used geothermal heat by 1970. The largest and most-famous geothermal district heating system is in Reykjavík, Iceland, where 99 percent of the city received geothermal water for space heating starting in the 1930s. Early industrial direct-use applications included the extraction of borate compounds from geothermal fluids at Larderello, Italy, during the early 19th century.
The first geothermal electric power generation also took place in Larderello, with the development of an experimental plant in 1904. The first commercial use of that technology occurred there in 1913 with the construction of a plant that produced 250 kilowatts (kW). Geothermal power plants were commissioned in New Zealand starting in 1958 and at the Geysers in northern California in 1960. The Italian and American plants were dry steam facilities, where low-permeability reservoirs produced only steam. In New Zealand, however, high-temperature and high-pressure water emerges naturally as a mixture made up of 80 percent superheated water and 20 percent steam. The steam coming directly from the ground is piped to the power plant and used for power generation right away, whereas the superheated water is separated from the mixture and flashed into steam. Most geothermal plants at present are of this latter “wet steam” type.
By 2015 more than 80 countries were using geothermal energy, either directly or in conjunction with GHPs, the leaders being China, Turkey, Iceland, Japan, Hungary, and the United States. The total worldwide installed capacity for direct use in 2015 was about 73,290 megawatts thermal (MWt) delivering about 163,273 gigawatt-hours per year (587,786 terajoules per year), producing an annual utilization factor—the annual energy produced by the plant (in megawatt-hours) divided by the installed capacity of the plant (in megawatts [MW]) multiplied by 8,760 hours—of 28 percent in the heating mode.
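The utilization-factor formula quoted above is easy to state in code. Here is a minimal sketch in Python; the 50 MW plant in the example is hypothetical, chosen only to illustrate the arithmetic, and is not one of the statistics above.

HOURS_PER_YEAR = 8760  # hours in a year, as used in the definition above

def utilization_factor(annual_energy_mwh, capacity_mw):
    # annual energy produced (MWh) divided by installed capacity (MW)
    # multiplied by 8,760 hours
    return annual_energy_mwh / (capacity_mw * HOURS_PER_YEAR)

# hypothetical 50 MW plant producing 311,000 MWh in a year:
print(f"{utilization_factor(311_000, 50):.0%}")  # prints 71%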
Geothermal energy was used to produce electricity in 24 countries in the early 21st century, the leaders being the United States, the Philippines, Indonesia, Mexico, New Zealand, and Italy. In 2016 the total worldwide installed capacity for electrical power generation was about 13,400 MW, producing about 75,000 gigawatt-hours per year for a utilization factor of 71 percent (equivalent to 6,220 full-load operating hours annually). Many geothermal fields have utilization factors around 95 percent (equivalent to 8,322 full-load operating hours annually), the highest for any form of renewable energy. The “waste” fluid from the power plant is often used for lower-temperature applications, such as the bottoming cycle in a binary-cycle plant, before being injected back into the reservoir. Such cascaded uses can be found in the United States, Iceland, and Germany.
Geothermal energy is best found in areas with high thermal gradients. Those gradients occur in regions affected by recent volcanism, in areas located along plate boundaries (such as along the Pacific Ring of Fire), or in areas marked by thin crust (hot spots) such as Yellowstone National Park and the Hawaiian Islands. Geothermal reservoirs associated with those regions must have a heat source, adequate water recharge, a reservoir with adequate permeability or faults that allow fluids to rise close to the surface, and an impermeable caprock to prevent the escape of the heat. In addition, such reservoirs must be economically accessible (that is, within the range of drills).
The heated fluid from a geothermal resource is tapped by drilling wells, sometimes as deep as 9,100 metres (about 30,000 feet), and is extracted by pumping or by natural artesian flow (where the weight of the water forces it to the surface). Water and steam are then piped to the power plant to generate electricity or through insulated pipelines—which may be buried or placed aboveground—for use in heating and cooling applications. In general, electric power plant pipelines are limited to roughly 1.6 km (1 mile) in length to minimize heat loss in the steam. However, direct-use pipelines spanning several tens of kilometres have been installed with a temperature loss of less than 2–5 °C (3.6–9 °F), depending on the flow rate. The most economically efficient facilities are located close to the geothermal resource to minimize the expense of constructing long pipelines. In the case of electric power generation, costs can be kept down by locating the facility near electrical transmission lines to transmit the electricity to market.
Geothermal resources can be exhausted if the rate of heat extraction exceeds the rate of natural heat recharge. Normally, geothermal resources can be used for 20 to 30 years; however, the energy output may decrease with time, making continued development uneconomical. On the other hand, geothermal electric power has been produced continually from the Larderello geothermal field since the early 1900s and at the Geysers since 1960. Although there has been a decline in both of those fields, this problem has been partially overcome by drilling new wells and by recharging the water supply. At the Geysers, electrical capacity declined from 1,800 MW to approximately 1,000 MW, but about 200 MW of capacity was returned by placing the field under one operator and constructing pipelines to deliver wastewater for recharging the reservoir. Projects such as the Reykjavík district heating system have been operating since the 1930s with little change in the output, and the Oregon Institute of Technology geothermal heating system has been operating since the 1950s with no change in production. Thus, with proper management, geothermal resources can be sustainable for many years, and they can even recover if use is suspended for a period of time.
Environmental effects and economic costs
The environmental effects of geothermal development and power generation include the changes in land use associated with exploration and plant construction, noise and sight pollution, the discharge of water and gases, the production of foul odours, and soil subsidence. Most of those effects, however, can be mitigated with current technology so that geothermal uses have no more than a minimal impact on the environment. For example, Klamath Falls, Oregon, has approximately 600 geothermal wells for residential space heating. The city has also invested in a district heating system and a downtown snow-melting system, and it provides heating to local businesses. However, none of the systems used to supply and deliver geothermal energy are visible in town.
In addition, GHPs have a very minimal effect on the environment, because they make use of shallow geothermal resources within 100 metres (about 330 feet) of the surface. GHPs cause only small temperature changes to the groundwater or rocks and soil in the ground. In closed-loop systems the ground temperature around the vertical boreholes is slightly increased or decreased; the direction of the temperature change is governed by whether the system is dominated by heating (which would be the case in colder regions) or cooling (which would be the case in warmer regions). With balanced heating and cooling loads, the ground temperatures will remain stable. Likewise, open-loop systems using groundwater or lake water would have very little effect on temperature, especially in regions characterized by high groundwater flows.
Compared with other renewable energy sources, the main advantage of geothermal energy is that its base-load power is available 24 hours per day, 7 days per week, whereas solar and wind are available only about one-third of the time. In addition, the cost of geothermal energy varies between 5 and 10 cents per kilowatt-hour, which can be competitive with other energy sources, such as coal. The main disadvantage of geothermal energy development is the high initial investment cost in constructing the facilities and infrastructure and the high risk of proving the resources. (Geothermal resources are often found in low-permeability rocks, and exploration activities often drill “dry” holes—that is, holes that produce steam in amounts too low to be exploited economically.) However, once the resource is proven, the annual cost of fuel (that is, hot water and steam) is low and tends not to escalate in price.
#1237 2021-12-29 00:06:25
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1213) Cochlea
The cochlea is the part of the inner ear involved in hearing. It is a spiral-shaped cavity in the bony labyrinth, in humans making 2.75 turns around its axis, the modiolus. A core component of the cochlea is the Organ of Corti, the sensory organ of hearing, which is distributed along the partition separating the fluid chambers in the coiled tapered tube of the cochlea.
The name cochlea derives from the Ancient Greek kokhlias, 'spiral, snail shell'.
The cochlea (plural is cochleae) is a spiraled, hollow, conical chamber of bone, in which waves propagate from the base (near the middle ear and the oval window) to the apex (the top or center of the spiral). The spiral canal of the cochlea is a section of the bony labyrinth of the inner ear that is approximately 30 mm long and makes 2¾ turns about the modiolus. The cochlear structures include:
Three scalae or chambers:
(a) the vestibular duct or scala vestibuli (containing perilymph), which lies superior to the cochlear duct and abuts the oval window
(b) the tympanic duct or scala tympani (containing perilymph), which lies inferior to the cochlear duct and terminates at the round window
(c) the cochlear duct or scala media (containing endolymph), a region of high potassium ion concentration that the stereocilia of the hair cells project into
* Reissner's membrane, which separates the vestibular duct from the cochlear duct
* The osseous spiral lamina, a main structural element that separates the cochlear duct from the tympanic duct
* The spiral ligament.
The cochlea is a portion of the inner ear that looks like a snail shell (cochlea is Greek for snail). The cochlea receives sound in the form of vibrations, which cause the stereocilia to move. The stereocilia then convert these vibrations into nerve impulses which are carried up to the brain to be interpreted. Two of the three fluid-filled sections are canals, and the third is the cochlear duct, which houses the 'Organ of Corti'; this organ detects pressure impulses and sends corresponding signals along the auditory nerve to the brain. The two canals are called the vestibular canal and the tympanic canal.
The lengthwise partition that divides most of the cochlea is itself a fluid-filled tube, the third 'duct'. This central column is called the cochlear duct. Its fluid, endolymph, also contains electrolytes and proteins, but is chemically quite different from perilymph. Whereas the perilymph is rich in sodium ions, the endolymph is rich in potassium ions, which produces an ionic, electrical potential.
The hair cells are arranged in four rows in the Organ of Corti along the entire length of the cochlear coil. Three rows consist of outer hair cells (OHCs) and one row consists of inner hair cells (IHCs). The inner hair cells provide the main neural output of the cochlea. The outer hair cells, in contrast, mainly 'receive' neural input from the brain, which influences their motility as part of the cochlea's mechanical "pre-amplifier". The input to the OHC is from the olivary body via the medial olivocochlear bundle.
The cochlear duct is almost as complex on its own as the ear itself. The cochlear duct is bounded on three sides by the basilar membrane, the stria vascularis, and Reissner's membrane. The stria vascularis is a rich bed of capillaries and secretory cells; Reissner's membrane is a thin membrane that separates endolymph from perilymph; and the basilar membrane is a mechanically somewhat stiff membrane that supports the receptor organ for hearing, the Organ of Corti, and determines the mechanical wave propagation properties of the cochlear system.
The stapes (stirrup) ossicle bone of the middle ear transmits vibrations to the fenestra ovalis (oval window) on the outside of the cochlea, which vibrates the perilymph in the vestibular duct (upper chamber of the cochlea). The ossicles are essential for efficient coupling of sound waves into the cochlea, since the cochlea environment is a fluid–membrane system, and it takes more pressure to move sound through fluid–membrane waves than it does through air. A pressure increase is achieved because the area of the tympanic membrane (drum) is about 20 times that of the oval window (stapes footplate). Since pressure = force/area, this results in a pressure gain of about 20 times the original sound wave pressure in air. This gain is a form of impedance matching – to match the soundwave travelling through air to that travelling in the fluid–membrane system.
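To make the force/area argument concrete, here is a minimal Python sketch. The two areas are commonly cited textbook values, not figures from the passage above, so treat them as assumptions.

# typical effective areas (assumed textbook values, in square millimetres)
tympanic_membrane_mm2 = 55.0  # eardrum
oval_window_mm2 = 3.2         # stapes footplate

# pressure = force / area, so delivering roughly the same force
# onto a smaller area multiplies the pressure
pressure_gain = tympanic_membrane_mm2 / oval_window_mm2
print(f"pressure gain of about {pressure_gain:.0f}x")  # about 17x, of the order of 20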
The perilymph in the vestibular duct and the endolymph in the cochlear duct act mechanically as a single duct, being kept apart only by the very thin Reissner's membrane. The vibrations of the endolymph in the cochlear duct displace the basilar membrane in a pattern that peaks a distance from the oval window depending upon the soundwave frequency. The Organ of Corti vibrates due to outer hair cells further amplifying these vibrations. Inner hair cells are then displaced by the vibrations in the fluid, and depolarise by an influx of K+ via their tip-link-connected channels, and send their signals via neurotransmitter to the primary auditory neurons of the spiral ganglion.
The hair cells in the Organ of Corti are tuned to certain sound frequencies by way of their location in the cochlea, due to the degree of stiffness in the basilar membrane. This stiffness is due to, among other things, the thickness and width of the basilar membrane, which along the length of the cochlea is stiffest nearest its beginning at the oval window, where the stapes introduces the vibrations coming from the eardrum. Since its stiffness is high there, it allows only high-frequency vibrations to move the basilar membrane, and thus the hair cells. The farther a wave travels towards the cochlea's apex (the helicotrema), the less stiff the basilar membrane becomes; waves therefore slow down, and the membrane responds best to progressively lower frequencies. In addition, in mammals, the cochlea is coiled, which has been shown to enhance low-frequency vibrations as they travel through the fluid-filled coil. This spatial arrangement of sound reception is referred to as tonotopy.
Very low frequencies travel all the way along the cochlea and through the helicotrema. Frequencies this low still activate the Organ of Corti to some extent but are too low to elicit the perception of a pitch. Higher frequencies do not propagate to the helicotrema, due to the stiffness-mediated tonotopy.
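This place–frequency map (tonotopy) is often summarized by the Greenwood function. The sketch below uses the commonly quoted constants for the human cochlea; the formula and its parameters are standard in the literature but are not given in the passage above, so they are included only as an illustration.

def greenwood_hz(x):
    # characteristic frequency at relative position x along the basilar
    # membrane, with x = 0 at the apex (helicotrema) and x = 1 at the base
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: about {greenwood_hz(x):,.0f} Hz")
# apex: about 20 Hz (low frequencies); base: about 20,700 Hz (high frequencies)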
Hair cell amplification
Not only does the cochlea "receive" sound; a healthy cochlea also generates and amplifies sound when necessary. Where the organism needs a mechanism to hear very faint sounds, the cochlea amplifies by the reverse transduction of the OHCs, converting electrical signals back to mechanical in a positive-feedback configuration. The OHCs have a protein motor called prestin on their outer membranes; it generates additional movement that couples back to the fluid–membrane wave. This "active amplifier" is essential in the ear's ability to amplify weak sounds.
Otoacoustic emissions
Role of gap junctions
Gap-junction proteins, called connexins, expressed in the cochlea play an important role in auditory functioning. Mutations in gap-junction genes have been found to cause syndromic and nonsyndromic deafness. Certain connexins, including connexin 30 and connexin 26, are prevalent in the two distinct gap-junction systems found in the cochlea. The epithelial-cell gap-junction network couples non-sensory epithelial cells, while the connective-tissue gap-junction network couples connective-tissue cells. Gap-junction channels recycle potassium ions back to the endolymph after mechanotransduction in hair cells. Importantly, gap junction channels are found between cochlear supporting cells, but not auditory hair cells.
Clinical significance:
Hearing loss
Hearing loss is a partial or total inability to hear. Hearing loss may be present at birth or acquired at any time afterwards. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to acquire spoken language, and in adults it can create difficulties with social interaction and at work. Hearing loss can be temporary or permanent. Hearing loss related to age usually affects both ears and is due to cochlear hair cell loss. In some people, particularly older people, hearing loss can result in loneliness. Deaf people usually have little to no hearing.
As of 2013 hearing loss affects about 1.1 billion people to some degree. It causes disability in about 466 million people (5% of the global population), and moderate to severe disability in 124 million people. Of those with moderate to severe disability 108 million live in low and middle income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture may see themselves as having a difference rather than a disability. Many members of Deaf culture oppose attempts to cure deafness and some within this community view cochlear implants with concern as they have the potential to eliminate their culture. The terms hearing impairment or hearing loss are often viewed negatively as emphasizing what people cannot do, although the terms are still regularly used when referring to deafness in medical contexts.
Other animals
#1238 2021-12-30 00:17:48
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1214) Molecule
Characteristics of molecules
The division of a sample of a substance into progressively smaller parts produces no change in either its composition or its chemical properties until parts consisting of single molecules are reached. Further subdivision of the substance leads to still smaller parts that usually differ from the original substance in composition and always differ from it in chemical properties. In this latter stage of fragmentation the chemical bonds that hold the atoms together in the molecule are broken.
Atoms consist of a single nucleus with a positive charge surrounded by a cloud of negatively charged electrons. When atoms approach one another closely, the electron clouds interact with each other and with the nuclei. If this interaction is such that the total energy of the system is lowered, then the atoms bond together to form a molecule. Thus, from a structural point of view, a molecule consists of an aggregation of atoms held together by valence forces. Diatomic molecules contain two atoms that are chemically bonded. If the two atoms are identical, as in, for example, the oxygen molecule (O2), they compose a homonuclear diatomic molecule, while if the atoms are different, as in the carbon monoxide molecule (CO), they make up a heteronuclear diatomic molecule. Molecules containing more than two atoms are termed polyatomic molecules, e.g., carbon dioxide (CO2) and water (H2O). Polymer molecules may contain many thousands of component atoms.
Molecular bonding
The ratio of the numbers of atoms that can be bonded together to form molecules is fixed; for example, every water molecule contains two atoms of hydrogen and one atom of oxygen. It is this feature that distinguishes chemical compounds from solutions and other mechanical mixtures. Thus hydrogen and oxygen may be present in any arbitrary proportions in mechanical mixtures but when sparked will combine only in definite proportions to form the chemical compound water (H2O). It is possible for the same kinds of atoms to combine in different but definite proportions to form different molecules; for example, two atoms of hydrogen will chemically bond with one atom of oxygen to yield a water molecule, whereas two atoms of hydrogen can chemically bond with two atoms of oxygen to form a molecule of hydrogen peroxide (H2O2). Furthermore, it is possible for atoms to bond together in identical proportions to form different molecules. Such molecules are called isomers and differ only in the arrangement of the atoms within the molecules. For example, ethyl alcohol (CH3CH2OH) and methyl ether (CH3OCH3) both contain one, two, and six atoms of oxygen, carbon, and hydrogen, respectively, but these atoms are bonded in different ways.
Not all substances are made up of distinct molecular units. Sodium chloride (common table salt), for example, consists of sodium ions and chlorine ions arranged in a lattice so that each sodium ion is surrounded by six equidistant chlorine ions and each chlorine ion is surrounded by six equidistant sodium ions. The forces acting between any sodium and any adjacent chlorine ion are equal. Hence, no distinct aggregate identifiable as a molecule of sodium chloride exists. Consequently, in sodium chloride and in all solids of similar type, the concept of the chemical molecule has no significance. Therefore, the formula for such a compound is given as the simplest ratio of the atoms, called a formula unit—in the case of sodium chloride, NaCl.
Molecules are held together by shared electron pairs, or covalent bonds. Such bonds are directional, meaning that the atoms adopt specific positions relative to one another so as to maximize the bond strengths. As a result, each molecule has a definite, fairly rigid structure, or spatial distribution of its atoms. Structural chemistry is concerned with valence, which determines how atoms combine in definite ratios and how this is related to the bond directions and bond lengths. The properties of molecules correlate with their structures; for example, the water molecule is bent structurally and therefore has a dipole moment, whereas the carbon dioxide molecule is linear and has no dipole moment. The elucidation of the manner in which atoms are reorganized in the course of chemical reactions is important. In some molecules the structure may not be rigid; for example, in ethane (H3CCH3) there is virtually free rotation about the carbon-carbon single bond.
(Figure caption: Ionic bonding in sodium chloride. An atom of sodium (Na) donates one of its electrons to an atom of chlorine (Cl) in a chemical reaction, and the resulting positive ion (Na+) and negative ion (Cl−) form a stable ionic compound (sodium chloride; common table salt) based on this ionic bond.)
The nuclear positions in a molecule are determined either from microwave vibration-rotation spectra or by neutron diffraction. The electron cloud surrounding the nuclei in a molecule can be studied by X-ray diffraction experiments. Further information can be obtained by electron spin resonance or nuclear magnetic resonance techniques. Advances in electron microscopy have enabled visual images of individual molecules and atoms to be produced.
Theoretically the molecular structure is determined by solving the quantum mechanical equation for the motion of the electrons in the field of the nuclei (called the Schrödinger equation). In a molecular structure the bond lengths and bond angles are those for which the molecular energy is the least. The determination of structures by numerical solution of the Schrödinger equation has become a highly developed process entailing use of computers and supercomputers.
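The idea that the equilibrium structure sits at the minimum of the molecular energy can be illustrated with a toy one-dimensional example: minimizing a model potential for a diatomic molecule. The Morse potential and the roughly H2-like parameters below are assumptions made for illustration; real structure determination solves the Schrödinger equation, as described above.

import numpy as np
from scipy.optimize import minimize_scalar

D_e = 4.5    # well depth in eV (roughly H2-like, assumed)
a = 1.94     # width parameter in 1/angstrom (assumed)
r_e = 0.741  # equilibrium bond length of H2 in angstrom

def morse(r):
    # Morse potential energy (eV) at internuclear distance r (angstrom)
    return D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

result = minimize_scalar(morse, bounds=(0.3, 3.0), method="bounded")
print(f"energy-minimizing bond length: {result.x:.3f} angstrom")  # ~0.741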
Polar and nonpolar molecules
If a molecule has no net electrical charge, its negative charge is equal to its positive charge. The forces experienced by such molecules depend on how the positive and negative charges are arranged in space. If the arrangement is spherically symmetric, the molecule is said to be nonpolar. If there is an excess of positive charge on one end of the molecule and an excess of negative charge on the other, the molecule has a dipole moment (i.e., a measurable tendency to rotate in an applied electric field) and is therefore called polar. When polar molecules are free to rotate, they tend to favour those orientations that lead to attractive forces.
Nonpolar molecules generally are considered lipophilic (lipid-loving), whereas polar chemicals are hydrophilic (water-loving). Lipid-soluble, nonpolar molecules pass readily through a cell membrane because they dissolve in the hydrophobic, nonpolar portion of the lipid bilayer. Although permeable to water (a polar molecule), the nonpolar lipid bilayer of cell membranes is impermeable to many other polar molecules, such as charged ions or those that contain many polar side chains. Polar molecules pass through lipid membranes via specific transport systems.
Molecular weight
The molecular weight of a molecule is the sum of the atomic weights of its component atoms. If a substance has molecular weight M, then M grams of the substance is termed one mole. The number of molecules in one mole is the same for all substances; this number is known as Avogadro’s number (6.022140857 × 10^23). Molecular weights can be determined by mass spectrometry and by techniques based on thermodynamics or kinetic transport phenomena.
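A minimal worked example of the mole relationship just described, in Python (the 18.02 g/mol molecular weight of water is a standard figure, not taken from the passage):

AVOGADRO = 6.022140857e23  # molecules per mole, as quoted above

def molecule_count(mass_g, molecular_weight_g_per_mol):
    # number of moles times Avogadro's number
    return (mass_g / molecular_weight_g_per_mol) * AVOGADRO

# 18.02 g of water (M = 18.02) is one mole:
print(f"{molecule_count(18.02, 18.02):.3e}")  # 6.022e+23 molecules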
In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This relaxes the requirement that a molecule contains two or more atoms, since the noble gases are individual atoms.
A molecule may be homonuclear, that is, it consists of atoms of one chemical element, e.g. two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, e.g. water (two hydrogen atoms and one oxygen atom; H2O).
Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids of which they are composed, the nucleic acids (DNA and RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals are generally ionic compounds, thus they are not molecules, e.g. iron sulfate.
However, the majority of familiar solid substances on Earth are made partly or completely of crystals or ionic compounds, which are not made of molecules. These include all of the minerals that make up the substance of the Earth, sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds, but are not made of identifiable molecules.
Molecular science
History and etymology
Molecules are generally held together by covalent bonding. Several non-metallic elements exist only as molecules in the environment either in compounds or as homonuclear molecules, not as free atoms: for example, hydrogen.
While some people say a metallic crystal can be considered a single giant molecule held together by metallic bonding, others point out that metals behave very differently than molecules.
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding.
Molecular size
Effective molecular radius is the size a molecule displays in solution.
Molecular formulas:
Chemical formula types
Structural formula
Molecular geometry
Molecular spectroscopy
Theoretical aspects
#1239 2021-12-30 22:10:45
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1215) Sulfuric Acid
Sulfuric acid (American spelling and the preferred IUPAC name) or sulphuric acid (Commonwealth spelling), known in antiquity as oil of vitriol, is a mineral acid composed of the elements sulfur, oxygen and hydrogen, with the molecular formula H2SO4. It is a colorless, odorless and viscous liquid that is miscible with water.
Pure sulfuric acid does not exist naturally on Earth due to its strong affinity for water vapor; for this reason, it is hygroscopic and readily absorbs water vapor from the air. Concentrated sulfuric acid is highly corrosive towards other materials, from rocks to metals, since it is an oxidant with powerful dehydrating properties. Phosphorus pentoxide is a notable exception in that it is not affected by the acid's dehydrating property; on the contrary, it dehydrates sulfuric acid to sulfur trioxide. Upon addition of sulfuric acid to water, a considerable amount of heat is released; the reverse procedure of adding water to the acid should therefore not be performed, since the heat released may boil the solution, spraying droplets of hot acid in the process. Upon contact with body tissue, sulfuric acid can cause severe acidic chemical burns and even secondary thermal burns due to dehydration. Dilute sulfuric acid is substantially less hazardous because it lacks the oxidative and dehydrating properties of the concentrated acid; however, it should still be handled with care for its acidity.
Sulfuric acid is a very important commodity chemical, and a nation's sulfuric acid production is a good indicator of its industrial strength. It is widely produced with different methods, such as the contact process, the wet sulfuric acid process, the lead chamber process and some other methods. Sulfuric acid is also a key substance in the chemical industry. It is most commonly used in fertilizer manufacture, but is also important in mineral processing, oil refining, wastewater processing, and chemical synthesis. It has a wide range of end applications, including in domestic acidic drain cleaners, as an electrolyte in lead-acid batteries, as a dehydrating agent, and in various cleaning agents. Sulfuric acid can be obtained by dissolving sulfur trioxide in water.
Summary : II
Sulfuric acid, also spelled sulphuric acid (H2SO4), also called oil of vitriol or hydrogen sulfate, is a dense, colourless, oily, corrosive liquid; it is one of the most commercially important of all chemicals. Sulfuric acid is prepared industrially by the reaction of water with sulfur trioxide, which in turn is made by chemical combination of sulfur dioxide and oxygen either by the contact process or the chamber process. In various concentrations the acid is used in the manufacture of fertilizers, pigments, dyes, drugs, explosives, detergents, and inorganic salts and acids, as well as in petroleum refining and metallurgical processes. In one of its most familiar applications, sulfuric acid serves as the electrolyte in lead–acid storage batteries.
Due to its affinity for water, pure anhydrous sulfuric acid does not exist in nature. Volcanic activity can result in the production of sulfuric acid, depending on the emissions associated with specific volcanoes, and sulfuric acid aerosols from an eruption can persist in the stratosphere for many years. These aerosols can then reform into sulfur dioxide (SO2), a precursor of acid rain, though volcanic activity is a relatively minor contributor to acid rainfall.
Sulfuric acid is a very strong acid; in aqueous solutions it ionizes completely to form hydronium ions and hydrogen sulfate ions. In dilute solutions the hydrogen sulfate ions also dissociate, forming more hydronium ions and sulfate ions. In addition to being an oxidizing agent, reacting readily at high temperatures with many metals, carbon, sulfur, and other substances, concentrated sulfuric acid is also a strong dehydrating agent, combining violently with water; in this capacity, it chars many organic materials, such as wood, paper, or sugar, leaving a carbonaceous residue.
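For concreteness, the two-stage ionization described above can be written out; these are standard textbook equations rather than quotations from the passage:

H2SO4 + H2O → H3O+ + HSO4− (first ionization, essentially complete)
HSO4− + H2O ⇌ H3O+ + SO4^2− (second ionization, significant in dilute solution)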
#1240 2022-01-01 01:33:27
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1216) Nitric Acid
Nitric acid is a highly corrosive mineral acid with the chemical formula HNO3. The pure compound is colorless, but older samples tend to acquire a yellow cast due to decomposition into oxides of nitrogen and water. Most commercially available nitric acid has a concentration of 68% in water. When the solution contains more than 86% HNO3, it is referred to as fuming nitric acid. Depending on the amount of nitrogen dioxide present, fuming nitric acid is further characterized as red fuming nitric acid at concentrations above 86%, or white fuming nitric acid at concentrations above 95%.
Summary II
Nitric acid, (HNO3), colourless, fuming, and highly corrosive liquid (freezing point −42 °C [−44 °F], boiling point 83 °C [181 °F]) that is a common laboratory reagent and an important industrial chemical for the manufacture of fertilizers and explosives. It is toxic and can cause severe burns.
Nitric acid decomposes into water, nitrogen dioxide, and oxygen, forming a brownish yellow solution. It is a strong acid, completely ionized into hydronium and nitrate ions in aqueous solution, and a powerful oxidizing agent (one that acts as electron acceptor in oxidation-reduction reactions). Among the many important reactions of nitric acid are: neutralization with ammonia to form ammonium nitrate, a major component of fertilizers; nitration of glycerol and toluene, forming the explosives nitroglycerin and trinitrotoluene (TNT), respectively; preparation of nitrocellulose; and oxidation of metals to the corresponding oxides or nitrates.
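As a concrete example, the first reaction in the list above can be written as a standard textbook equation:

HNO3 + NH3 → NH4NO3 (ammonium nitrate)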
#1241 2022-01-02 00:01:17
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1217) Hydrochloric acid
Hydrochloric acid is the water-based, or aqueous, solution of hydrogen chloride gas. It is also the main component of gastric acid, an acid produced naturally in the human stomach to help digest food. Hydrochloric acid is also synthetically produced for a variety of industrial and commercial applications, and can be formed by a number of manufacturing processes, including dissolving hydrogen chloride gas in water.
Uses & Benefits
Hydrochloric acid is also used to make many other chemicals and as a disinfectant and slimicide, a chemical that prevents the growth of slime in paper stock.
Other common end uses for hydrochloric acid include household cleaners, pool maintenance and food manufacturing.
Steel Production
Hydrochloric acid is used in pickling operations to remove rust and other impurities from carbon, alloy and stainless steel, to prepare the steel for final applications in building and construction projects, and in products such as car bodies and household appliances. It is also used in aluminum etching and metal cleaning applications.
Household Cleaners
Hydrochloric acid can be an ingredient in household cleaners such as toilet bowl cleaners, bathroom tile cleaners and other porcelain cleaners, due to its corrosive properties that help clean tough stains.
Pool Sanitation
Hydrochloric acid is used as a swimming pool treatment chemical, to help maintain an optimal pH in the water.
Food Production and Processing
The food industry uses hydrochloric acid to process a variety of food products, such as corn syrups used in soft drinks, cookies, crackers, ketchup and cereals. Hydrochloric acid is also used as an acidifier in sauces, vegetable juices and canned goods, to help enhance flavor and reduce spoilage.
Calcium Chloride Production
When hydrochloric acid is mixed or reacted with limestone, it produces calcium chloride, a type of salt used to de-ice roads. Calcium chloride also has uses in food production as a stabilizer and firming agent, for example in baked goods, as well as uses as an antimicrobial.
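The limestone reaction described above can be written as a standard textbook equation:

CaCO3 + 2 HCl → CaCl2 + H2O + CO2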
Additional Uses
Hydrochloric acid is used in the production of batteries, photoflash bulbs and fireworks. It is also used in leather processing, building and construction, oil well acidizing and producing gelatin products.
Safety Information
Hydrochloric acid in its concentrated, liquid form has a strong irritating odor and is very corrosive. It can cause damage, such as chemical burns, upon contact, according to the U.S. National Library of Medicine. The U.S. Centers for Disease Control and Prevention (CDC) notes that hydrochloric acid can cause eye damage, even blindness, if splashed in the eyes.
Ingestion of concentrated hydrochloric acid can cause severe injury to the mouth, throat, esophagus and stomach. Personal protective equipment (PPE) such as vapor respirators, rubber gloves, splash goggles and face shields should be used when handling hydrochloric acid. If used in the workplace, it is recommended that an eye flush station be available in case of accidental exposure.
When using pool cleaners that contain hydrochloric acid (also known as muriatic acid), it is important to follow directions on the product label for safe handling. The CDC has developed two posters with recommendations for pool chemical safety handling as well as storage of pool chemicals for pool owners and operators.
Storing Hydrochloric Acid
Metal containers are not suitable storage containers for hydrochloric acid due to its corrosive nature. Plastic containers, such as those made of PVC, can typically be used to store hydrochloric acid.
Hydrochloric acid, also known as muriatic acid, is an aqueous solution of hydrogen chloride (chemical formula: HCl). It is a colorless solution with a distinctive pungent smell. It is classified as a strong acid. It is a component of the gastric acid in the digestive systems of most animal species, including humans. Hydrochloric acid is an important laboratory reagent and industrial chemical.
Production
Hydrochloric acid is usually prepared industrially by dissolving hydrogen chloride in water. Hydrogen chloride can be generated in many ways, and thus several precursors to hydrochloric acid exist. The large-scale production of hydrochloric acid is almost always integrated with the industrial-scale production of other chemicals, such as the chloralkali process, which produces hydroxide, hydrogen, and chlorine, the latter two of which can be combined to produce HCl.
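The direct combination just mentioned is the standard textbook reaction

H2 + Cl2 → 2 HCl

and dissolving the resulting hydrogen chloride gas in water gives hydrochloric acid.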
Industrial market
Major producers worldwide include Dow Chemical at 2 million tonnes annually (Mt/year), calculated as HCl gas, Georgia Gulf Corporation, Tosoh Corporation, Akzo Nobel, and Tessenderlo at 0.5 to 1.5 Mt/year each. Total world production, for comparison purposes expressed as HCl, is estimated at 20 Mt/year, with 3 Mt/year from direct synthesis, and the rest as secondary product from organic and similar syntheses. By far, most hydrochloric acid is consumed captively by the producer. The open world market size is estimated at 5 Mt/year.
Hydrochloric acid is a strong inorganic acid that is used in many industrial processes such as refining metal. The application often determines the required product quality. Hydrogen chloride, not hydrochloric acid, is used more widely in industrial organic chemistry, e.g. for vinyl chloride and dichloroethane.
Laboratory use
Of the six common strong mineral acids in chemistry, hydrochloric acid is the monoprotic acid least likely to undergo an interfering oxidation-reduction reaction. It is one of the least hazardous strong acids to handle; despite its acidity, it contains the non-reactive and non-toxic chloride ion. Intermediate-strength hydrochloric acid solutions are quite stable upon storage, maintaining their concentrations over time. These attributes, plus the fact that it is available as a pure reagent, make hydrochloric acid an excellent acidifying reagent. It is also inexpensive.
Hydrochloric acid has been used for dissolving calcium carbonate, e.g. such things as de-scaling kettles and for cleaning mortar off brickwork.
Presence in living organisms
Gastric acid is one of the main secretions of the stomach. It consists mainly of hydrochloric acid and acidifies the stomach content to a pH of 1 to 2. Chloride and hydrogen ions are secreted separately in the stomach fundus region at the top of the stomach by parietal cells of the gastric mucosa into a secretory network called canaliculi before it enters the stomach lumen.
Being a strong acid, hydrochloric acid is corrosive to living tissue and to many materials, but not to rubber. Typically, rubber protective gloves and related protective gear are used when handling concentrated solutions.
#1242 2022-01-03 01:05:47
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1218) Vertebral column
The vertebral column, also known as the backbone or spine, is part of the axial skeleton. The vertebral column is the defining characteristic of a vertebrate in which the notochord (a flexible rod of uniform composition) found in all chordates has been replaced by a segmented series of bone: vertebrae separated by intervertebral discs. The vertebral column houses the spinal canal, a cavity that encloses and protects the spinal cord.
There are about 50,000 species of animals that have a vertebral column. The human vertebral column is one of the most-studied examples.
In humans the structure and function of the vertebral column can be affected by certain diseases, disorders, or injuries. Examples include scoliosis, lordosis, and kyphosis, which are deviations from the normal spinal curvature; degenerative diseases, such as osteoarthritis and Baastrup disease (kissing spine syndrome); and tuberculosis of the spine (Pott disease), which is caused by infection of the vertebral column by Mycobacterium tuberculosis.
The vertebral column is a series of approximately 33 bones called vertebrae, which are separated by intervertebral discs.
The column can be divided into five different regions, with each region characterised by a different vertebral structure.
In this article, we shall look at the anatomy of the vertebral column – its function, structure, and clinical significance.
The vertebral column has four main functions:
* Protection – encloses and protects the spinal cord within the spinal canal.
* Support – carries the weight of the body above the pelvis.
* Axis – forms the central axis of the body.
* Movement – has roles in both posture and movement.
Structure of a Vertebrae
All vertebrae share a basic common structure. They each consist of an anterior vertebral body, and a posterior vertebral arch.
Vertebral Body
The vertebral body forms the anterior part of each vertebra.
It is the weight-bearing component, and vertebrae in the lower portion of the column have larger bodies than those in the upper portion (to better support the increased weight).
The superior and inferior aspects of the vertebral body are lined with hyaline cartilage. Adjacent vertebral bodies are separated by a fibrocartilaginous intervertebral disc.
Vertebral Arch
The vertebral arch forms the lateral and posterior aspect of each vertebra.
In combination with the vertebral body, the vertebral arch forms an enclosed hole – the vertebral foramen. The foramina of all the vertebrae line up to form the vertebral canal, which encloses the spinal cord.
The vertebral arches have several bony prominences, which act as attachment sites for muscles and ligaments:
* Spinous processes – each vertebra has a single spinous process, centred posteriorly at the point of the arch.
* Transverse processes – each vertebra has two transverse processes, which extend laterally and posteriorly from the vertebral body. In the thoracic vertebrae, the transverse processes articulate with the ribs.
* Pedicles – connect the vertebral body to the transverse processes.
* Lamina – connect the transverse and spinous processes.
* Articular processes – form joints between one vertebra and its superior and inferior counterparts. The articular processes are located at the intersection of the laminae and pedicles.
Classifications of Vertebrae
Cervical Vertebrae
There are seven cervical vertebrae, characterised by three distinguishing features:
* Bifid spinous process – the spinous process bifurcates at its distal end. Exceptions to this are C1 (no spinous process) and C7 (spinous process is longer than that of C2-C6 and may not bifurcate).
* Transverse foramina – an opening in each transverse process, through which the vertebral arteries travel to the brain.
* Triangular vertebral foramen.
Two cervical vertebrae are unique: C1 and C2 (called the atlas and axis, respectively) are specialised to allow for the movement of the head.
Thoracic Vertebrae
The twelve thoracic vertebrae are medium-sized, and increase in size from superior to inferior. Their specialised function is to articulate with ribs, producing the bony thorax.
Each thoracic vertebra has two ‘demi facets,’ superiorly and inferiorly placed on either side of its vertebral body. The demi facets articulate with the heads of two different ribs.
On the transverse processes of the thoracic vertebrae, there is a costal facet for articulation with the shaft of a single rib. For example, the head of Rib 2 articulates with the inferior demi facet of thoracic vertebra 1 (T1) and the superior demi facet of T2, while the shaft of Rib 2 articulates with the costal facets of T2.
The spinous processes of thoracic vertebrae are oriented obliquely inferiorly and posteriorly. In contrast to the cervical vertebrae, the vertebral foramen of thoracic vertebrae is circular.
Lumbar Vertebrae
There are five lumbar vertebrae in most humans, which are the largest in the vertebral column. They are structurally specialised to support the weight of the torso.
Lumbar vertebrae have very large vertebral bodies, which are kidney shaped. They lack the characteristic features of other vertebrae, with no transverse foramina, costal facets, or bifid spinous processes.
However, like the cervical vertebrae, they have a triangular-shaped vertebral foramen. Their spinous processes are shorter than those of thoracic vertebrae and do not extend inferiorly below the level of the vertebral body.
Their size and orientation permits needle access to the spinal canal and spinal cord (which would not be possible between thoracic vertebrae). Examples include epidural anaesthesia administration and lumbar puncture.
Sacrum and Coccyx
The sacrum is a collection of five fused vertebrae. It is described as an inverted triangle, with the apex pointing inferiorly. On the lateral walls of the sacrum are facets for articulation with the pelvis at the sacroiliac joints.
The coccyx is a small bone which articulates with the apex of the sacrum. It is recognised by its lack of vertebral arches. Due to the lack of vertebral arches, there is no vertebral canal.
Separation of S1 from the sacrum is termed “lumbarisation”, while fusion of L5 to the sacrum is termed “sacralisation”. These conditions are congenital abnormalities.
Joints and Ligaments
The mobile vertebrae articulate with each other via joints between their bodies and articular facets:
* Left and right superior articular facets articulate with the vertebra above.
* Left and right inferior articular facets articulate with the vertebra below.
* Vertebral bodies indirectly articulate with each other via the intervertebral discs.
The vertebral body joints are cartilaginous joints, designed for weight-bearing. The articular surfaces are covered by hyaline cartilage, and are connected by the intervertebral disc.
Two ligaments strengthen the vertebral body joints: the anterior and posterior longitudinal ligaments, which run the full length of the vertebral column. The anterior longitudinal ligament is thick and prevents hyperextension of the vertebral column. The posterior longitudinal ligament is weaker and prevents hyperflexion.
The joints between the articular facets, called facet joints, allow for some gliding motions between the vertebrae. They are strengthened by several ligaments:
* Ligamentum flavum – extends between lamina of adjacent vertebrae.
* Interspinous and supraspinous – join the spinous processes of adjacent vertebrae. The interspinous ligaments attach between processes, and the supraspinous ligaments attach to the tips.
* Intertransverse ligaments – extend between transverse processes.
#1243 2022-01-04 01:01:06
Registered: 2005-06-28
Posts: 39,685
Re: Miscellany
1219) Carbonic acid
In chemistry, carbonic acid is a dibasic acid with the chemical formula H2CO3. The pure compound decomposes at temperatures greater than ca. −80 °C.
Carbonic acid, (H2CO3) is a compound of the elements hydrogen, carbon, and oxygen. It is formed in small amounts when its anhydride, carbon dioxide (CO2), dissolves in water.
Carbonic acid plays a role in the formation of caves and cave formations like stalactites and stalagmites. The largest and most common caves are those formed by dissolution of limestone or dolomite by the action of water rich in carbonic acid derived from recent rainfall. The calcite in stalactites and stalagmites is derived from the overlying limestone near the bedrock/soil interface. Rainwater infiltrating through the soil absorbs carbon dioxide from the carbon dioxide-rich soil and forms a dilute solution of carbonic acid. When this acid water reaches the base of the soil, it reacts with the calcite in the limestone bedrock and takes some of it into solution. The water continues its downward course through narrow joints and fractures in the unsaturated zone with little further chemical reaction. When the water emerges from the cave roof, carbon dioxide is lost into the cave atmosphere, and some of the calcium carbonate is precipitated. The infiltrating water acts as a calcite pump, removing it from the top of the bedrock and redepositing it in the cave below.
Carbonic acid is important in the transport of carbon dioxide in the blood. Carbon dioxide enters blood in the tissues because its local partial pressure is greater than its partial pressure in blood flowing through the tissues. As carbon dioxide enters the blood, it combines with water to form carbonic acid, which dissociates into hydrogen ions and bicarbonate ions. Blood acidity is minimally affected by the released hydrogen ions because blood proteins, especially hemoglobin, are effective buffering agents. (A buffer solution resists change in acidity by combining with added hydrogen ions and, essentially, inactivating them.) The natural conversion of carbon dioxide to carbonic acid is a relatively slow process; however, carbonic anhydrase, a protein enzyme present inside the red blood cell, catalyzes this reaction with sufficient rapidity that it is accomplished in only a fraction of a second. Because the enzyme is present only inside the red blood cell, bicarbonate accumulates to a much greater extent within the red cell than in the plasma. The capacity of blood to carry carbon dioxide as bicarbonate is enhanced by an ion transport system inside the red blood cell membrane that simultaneously moves a bicarbonate ion out of the cell and into the plasma in exchange for a chloride ion. The simultaneous exchange of these two ions, known as the chloride shift, permits the plasma to be used as a storage site for bicarbonate without changing the electrical charge of either the plasma or the red blood cell. Only 26 percent of the total carbon dioxide content of blood exists as bicarbonate inside the red blood cell, while 62 percent exists as bicarbonate in plasma; however, the bulk of bicarbonate ions is first produced inside the cell, then transported to the plasma. A reverse sequence of reactions occurs when blood reaches the lung, where the partial pressure of carbon dioxide is lower than in the blood.
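The reaction chain described above can be summarized by the standard textbook equilibria

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−

with the hydration step catalyzed by carbonic anhydrase inside the red blood cell.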
Additional information
Physical Properties
* Appearance: Grayish white solid.
* Melting point: 210 Celsius, boiling point: -78 degree Celsius.
* Molecular weight: 62.024 g/mol.
* Solubility: Insoluble.
* Carbonic acid has a pH value of less than 7.
* Carbonic acid is odorless and has the alkaline taste.
Chemical Properties
* Carbonic acid is a weak and unstable dibasic acid.
* Its first dissociation constant corresponds to a pKa of about 6.3; a rough pH estimate based on this value is sketched below.
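As a rough illustration of what a pKa of about 6.3 means in practice, here is a minimal Python sketch estimating the pH of water in equilibrium with atmospheric CO2. The Henry's-law constant and the CO2 partial pressure are illustrative assumptions, not values taken from this post:

import math

KA1 = 10 ** -6.3   # first dissociation constant of carbonic acid (pKa ~ 6.3, from above)
KH = 3.3e-2        # assumed Henry's-law constant for CO2 in water, mol/(L*atm)
P_CO2 = 4.0e-4     # assumed atmospheric CO2 partial pressure, atm (~400 ppm)

c_co2 = KH * P_CO2            # dissolved CO2 concentration from Henry's law
h = math.sqrt(KA1 * c_co2)    # weak-acid approximation: [H+] ~ sqrt(Ka * C)
print(f"[CO2(aq)] = {c_co2:.2e} M, pH = {-math.log10(h):.2f}")   # about 5.6

This reproduces the familiar result that unpolluted rainwater is mildly acidic (pH near 5.6) simply from dissolved carbon dioxide.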
Carbonic Acid Uses:
Carbonic acid is also used for the hydrolysis of starch.
#1244 2022-01-05 00:02:47
Re: Miscellany
1220) DDT
DDT, abbreviation of dichlorodiphenyltrichloroethane, also called 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane, is a synthetic insecticide belonging to the family of organic halogen compounds. It is highly toxic toward a wide variety of insects as a contact poison, apparently exerting its effect by disorganizing the nervous system.
DDT, prepared by the reaction of chloral with chlorobenzene in the presence of sulfuric acid, was first made in 1874; its insecticidal properties were discovered in 1939 by a Swiss chemist, Paul Hermann Müller. During and after World War II, DDT was found to be effective against lice, fleas, and mosquitoes (the carriers of typhus, of plague, and of malaria and yellow fever, respectively) as well as the Colorado potato beetle, the gypsy moth, and other insects that attack valuable crops.
Many species of insects rapidly develop populations resistant to DDT; the high stability of the compound leads to its accumulation in insects that constitute the diet of other animals, with toxic effects on them, especially certain birds and fishes. These two disadvantages had severely decreased the value of DDT as an insecticide by the 1960s, and severe restrictions were imposed on its use in the United States in 1972.
Pure DDT is a colourless, crystalline solid that melts at 109° C (228° F); the commercial product, which is usually 65 to 80 percent active compound, along with related substances, is an amorphous powder that has a lower melting point. DDT is applied as a dust or by spraying its aqueous suspension.
Dichlorodiphenyltrichloroethane, commonly known as DDT, is a colorless, tasteless, and almost odorless crystalline chemical compound, an organochloride. Originally developed as an insecticide, it became infamous for its environmental impacts. DDT was first synthesized in 1874 by the Austrian chemist Othmar Zeidler. DDT's insecticidal action was discovered by the Swiss chemist Paul Hermann Müller in 1939. DDT was used in the second half of World War II to limit the spread of the insect-borne diseases malaria and typhus among civilians and troops. Müller was awarded the Nobel Prize in Physiology or Medicine in 1948 "for his discovery of the high efficiency of DDT as a contact poison against several arthropods".
By October 1945, DDT was available for public sale in the United States. Although it was promoted by government and industry for use as an agricultural and household pesticide, there were also concerns about its use from the beginning. Opposition to DDT came into focus with the 1962 publication of Rachel Carson's book Silent Spring. The book described environmental impacts that correlated with the widespread use of DDT in agriculture in the United States, and it questioned the logic of broadcasting potentially dangerous chemicals into the environment with little prior investigation of their environmental and health effects. The book cited claims that DDT and other pesticides caused cancer and that their agricultural use was a threat to wildlife, particularly birds. Although Carson never directly called for an outright ban on the use of DDT, its publication was a seminal event for the environmental movement and resulted in a large public outcry that eventually led, in 1972, to a ban on DDT's agricultural use in the United States.
A worldwide ban on agricultural use was formalized under the Stockholm Convention on Persistent Organic Pollutants which has been in effect since 2004. DDT still has limited use in disease vector control because of its effectiveness in killing mosquitos and thus reducing malarial infections, but that use is controversial due to environmental and health concerns.
Along with the passage of the Endangered Species Act, the United States ban on DDT is a major factor in the comeback of the bald eagle (the national bird of the United States) and the peregrine falcon from near-extinction in the contiguous United States.
Properties and chemistry
DDT is similar in structure to the insecticide methoxychlor and the acaricide dicofol. It is highly hydrophobic and nearly insoluble in water but has good solubility in most organic solvents, fats and oils. DDT does not occur naturally and is synthesised by consecutive Friedel–Crafts reactions between chloral (CCl3CHO) and two equivalents of chlorobenzene (C6H5Cl), in the presence of an acidic catalyst. DDT has been marketed under trade names including Anofex, Cezarex, Chlorophenothane, Dicophane, Dinocide, Gesarol, Guesapon, Guesarol, Gyron, Ixodex, Neocid, Neocidol and Zerdane; INN is clofenotane.
Isomers and related compounds
Commercial DDT is a mixture of several closely related compounds. Due to the nature of the chemical reaction used to synthesize DDT, several combinations of ortho and para arene substitution patterns are formed. The major component (77%) is the desired p,p' isomer. The o,p' isomeric impurity is also present in significant amounts (15%). Dichlorodiphenyldichloroethylene (DDE) and dichlorodiphenyldichloroethane (DDD) make up the balance of impurities in commercial samples. DDE and DDD are also the major metabolites and environmental breakdown products. DDT, DDE and DDD are sometimes referred to collectively as DDX.
#1245 2022-01-06 00:13:23
Re: Miscellany
1221) Cotton
Cotton is a soft, fluffy staple fiber that grows in a boll, or protective case, around the seeds of the cotton plants of the genus Gossypium in the mallow family Malvaceae. The fiber is almost pure cellulose, and can contain minor percentages of waxes, fats, pectins, and water. Under natural conditions, the cotton bolls will increase the dispersal of the seeds.
Current estimates for world production are about 25 million tonnes or 110 million bales annually, accounting for 2.5% of the world's arable land. India is the world's largest producer of cotton. The United States has been the largest exporter for many years.
Cotton is seed-hair fibre of several species of plants of the genus Gossypium, belonging to the hibiscus, or mallow, family (Malvaceae).
Cotton, one of the world’s leading agricultural crops, is plentiful and economically produced, making cotton products relatively inexpensive. The fibres can be made into a wide variety of fabrics ranging from lightweight voiles and laces to heavy sailcloths and thick-piled velveteens, suitable for a great variety of wearing apparel, home furnishings, and industrial uses. Cotton fabrics can be extremely durable and resistant to abrasion. Cotton accepts many dyes, is usually washable, and can be ironed at relatively high temperatures. It is comfortable to wear because it absorbs and releases moisture quickly. When warmth is desired, it can be napped, a process giving the fabric a downy surface. Various finishing processes have been developed to make cotton resistant to stains, water, and mildew; to increase resistance to wrinkling, thus reducing or eliminating the need for ironing; and to reduce shrinkage in laundering to not more than 1 percent. Nonwoven cotton, made by fusing or bonding the fibres together, is useful for making disposable products to be used as towels, polishing cloths, tea bags, tablecloths, bandages, and disposable uniforms and sheets for hospital and other medical uses.
Cotton fibre processing
Cotton fibres may be classified roughly into three large groups, based on staple length (average length of the fibres making up a sample or bale of cotton) and appearance. The first group includes the fine, lustrous fibres with staple length ranging from about 2.5 to 6.5 cm (about 1 to 2.5 inches) and includes types of the highest quality—such as Sea Island, Egyptian, and pima cottons. Least plentiful and most difficult to grow, long-staple cottons are costly and are used mainly for fine fabrics, yarns, and hosiery. The second group contains the standard medium-staple cotton, such as American Upland, with staple length from about 1.3 to 3.3 cm (0.5 to 1.3 inches). The third group includes the short-staple, coarse cottons, ranging from about 1 to 2.5 cm (0.5 to 1 inch) in length, used to make carpets and blankets, coarse and inexpensive fabrics, and blends with other fibres.
Most of the seeds (cottonseed) are separated from the fibres by a mechanical process called ginning. Ginned cotton is shipped in bales to a textile mill for yarn manufacturing. A traditional and still common processing method is ring spinning, by which the mass of cotton may be subjected to opening and cleaning, picking, carding, combing, drawing, roving, and spinning. The cotton bale is opened, and its fibres are raked mechanically to remove foreign matter (e.g., soil and seeds). A picker (picking machine) then wraps the fibres into a lap. A card (carding) machine brushes the loose fibres into rows that are joined as a soft sheet, or web, and forms them into loose untwisted rope known as card sliver. For higher-quality yarn, card sliver is put through a combing machine, which straightens the staple further and removes unwanted short lengths, or noils. In the drawing (drafting) stage, a series of variable-speed rollers attenuates and reduces the sliver to firm uniform strands of usable size. Thinner strands are produced by the roving (slubbing) process, in which the sliver is converted to roving by being pulled and slightly twisted. Finally, the roving is transferred to a spinning frame, where it is drawn further, twisted on a ring spinner, and wound on a bobbin as yarn.
Faster production methods include rotor spinning (a type of open-end spinning), in which fibres are detached from the card sliver and twisted, within a rotor, as they are joined to the end of the yarn. For the production of cotton blends, air-jet spinning may be used; in this high-speed method, air currents wrap loose fibres around a straight sliver core. Blends (composites) are made during yarn processing by joining drawn cotton with other staple fibres, such as polyester or casein.
The procedure for weaving cotton yarn into fabric is similar to that for other fibres. Cotton looms interlace the tense lengthwise yarns, called warp, with crosswise yarns called weft, or filling. Warp yarns often are treated chemically to prevent breaking during weaving.
Cultivation of the cotton plant
Pests and diseases
Cotton is attacked by several hundred species of insects, including such harmful species as the boll weevil, pink bollworm, cotton leafworm, cotton fleahopper, cotton aphid, rapid plant bug, conchuela, southern green stinkbug, spider mites (red spiders), grasshoppers, thrips, and tarnished plant bugs. Limited control of damage by insect pests can be achieved by proper timing of planting and other cultural practices or by selective breeding of varieties having some resistance to insect damage. Chemical insecticides, which were first introduced in the early 1900s, require careful and selective use because of ecological considerations but appear to be the most effective and efficient means of control. Conventional cotton production requires more insecticides than any other major crop, and the production of organic cotton, which relies on nonsynthetic insecticides, has been increasing in many places worldwide. Additionally, genetically modified “Bt cotton” was developed to produce bacterial proteins that are toxic to herbivorous insects, ostensibly reducing the amount of pesticides needed. Glyphosate-resistant cotton, which can tolerate the herbicide glyphosate, was also developed through genetic engineering.
#1246 2022-01-07 00:55:02
Re: Miscellany
1222) Biopsy
What Is a Biopsy?
Why Are Biopsies Done?
* A mammogram shows a lump or mass, indicating the possibility of breast cancer.
* A mole on the skin has changed shape recently and melanoma is possible.
* A person has chronic hepatitis and it's important to know if cirrhosis is present.
Types of Biopsies
Here are some types of biopsies:
What to Expect From Your Biopsy
What Happens After the Biopsy?
#1247 2022-01-08 00:08:47
Re: Miscellany
1223) Taproot
Taproot is a main root of a primary root system, growing vertically downward. Most dicotyledonous plants (see cotyledon), such as dandelions, produce taproots, and some, such as the edible roots of carrots and beets, are specialized for food storage.
Upon germination, the first structure to emerge from most seeds is the root from the embryonic radicle. This primary root is a taproot. In plants in which the taproot persists, smaller lateral roots (secondary roots) commonly arise from the taproot and may in turn produce even smaller lateral roots (tertiary roots). This serves to increase the surface area for water and mineral absorption. In other plants, the initial taproot is quickly modified into a fibrous, or diffuse, system, in which the initial secondary roots soon equal or exceed the primary root in size and there is no well-defined single taproot. Fibrous root systems are generally shallower than taproot systems.
A taproot is a large, central, and dominant root from which other roots sprout laterally. Typically a taproot is somewhat straight and very thick, is tapering in shape, and grows directly downward. In some plants, such as the carrot, the taproot is a storage organ so well developed that it has been cultivated as a vegetable.
* Beetroot
* Burdock
* Carrot
* Sugar beet
* Dandelion
* Parsley
* Parsnip
* Poppy mallow
* Radish
* Sagebrush
* Turnip
* Common milkweed
* Trees such as oaks, elms, pines and firs
Development of taproots
Horticultural considerations
#1248 2022-01-08 20:45:56
Re: Miscellany
1224) Oxalic Acid
Oxalic acid is an organic acid with the IUPAC name ethanedioic acid and formula HO2C−CO2H. It is the simplest dicarboxylic acid. It is a white crystalline solid that forms a colorless solution in water. Its name comes from the fact that early investigators isolated oxalic acid from flowering plants of the genus Oxalis, commonly known as wood-sorrels. It occurs naturally in many foods, but excessive ingestion of oxalic acid or prolonged skin contact can be dangerous.
Oxalic acid has much greater acid strength than acetic acid. It is a reducing agent and its conjugate base, known as oxalate, is a chelating agent for metal cations. Typically, oxalic acid occurs as the dihydrate with the formula C2H2O4·2H2O.
The preparation of salts of oxalic acid (crab acid) from plants had been known, at least since 1745, when the Dutch botanist and physician Herman Boerhaave isolated a salt from wood sorrel. By 1773, François Pierre Savary of Fribourg, Switzerland had isolated oxalic acid from its salt in sorrel.
In 1776, Swedish chemists Carl Wilhelm Scheele and Torbern Olof Bergman produced oxalic acid by reacting sugar with concentrated nitric acid; Scheele called the acid that resulted socker-syra or såcker-syra (sugar acid). By 1784, Scheele had shown that "sugar acid" and oxalic acid from natural sources were identical.
In 1824, the German chemist Friedrich Wöhler obtained oxalic acid by reacting cyanogen with ammonia in aqueous solution. This experiment may represent the first synthesis of a natural product.
Oxalic acid is mainly manufactured by the oxidation of carbohydrates or glucose using nitric acid or air in the presence of vanadium pentoxide. A variety of precursors can be used including glycolic acid and ethylene glycol.
Historically, oxalic acid was obtained exclusively by the action of caustics, such as sodium or potassium hydroxide, on sawdust. Pyrolysis of sodium formate (ultimately prepared from carbon monoxide) leads to the formation of sodium oxalate, which is easily converted to oxalic acid.
Laboratory methods
Although it can be readily purchased, oxalic acid can be prepared in the laboratory by oxidizing sucrose using nitric acid in the presence of a small amount of vanadium pentoxide as a catalyst.
The hydrated solid can be dehydrated with heat or by azeotropic distillation.
An electrocatalytic process developed in the Netherlands uses a copper complex to reduce carbon dioxide to oxalic acid; this conversion turns carbon dioxide into a feedstock for generating oxalic acid.
Occurrence in foods and plants
Early investigators isolated oxalic acid from wood-sorrel (Oxalis). Members of the spinach family and the brassicas (cabbage, broccoli, brussels sprouts) are high in oxalates, as are sorrel and umbellifers like parsley. Rhubarb leaves contain about 0.5% oxalic acid, and jack-in-the-pulpit (Arisaema triphyllum) contains calcium oxalate crystals. Similarly, the Virginia creeper, a common decorative vine, produces oxalic acid in its berries as well as oxalate crystals in the sap, in the form of raphides. Bacteria produce oxalates from oxidation of carbohydrates.
Plants of the genus Fenestraria produce optical fibers made from crystalline oxalic acid to transmit light to subterranean photosynthetic sites.
Carambola, also known as starfruit, also contains oxalic acid along with caramboxin. Citrus juice contains small amounts of oxalic acid. Citrus fruits produced in organic agriculture contain less oxalic acid than those produced in conventional agriculture.
The formation of naturally occurring calcium oxalate patinas on certain limestone and marble statues and monuments has been proposed to be caused by the chemical reaction of the carbonate stone with oxalic acid secreted by lichen or other microorganisms.
Production by fungi
Many soil fungus species secrete oxalic acid, resulting in greater solubility of metal cations, increased availability of certain soil nutrients, and the formation of calcium oxalate crystals. Some fungi, such as Aspergillus niger, have been extensively studied for the industrial production of oxalic acid; however, those processes are not yet economically competitive with production from oil and gas.
The conjugate base of oxalic acid is the hydrogenoxalate anion, and its conjugate base in turn, oxalate, is a competitive inhibitor of the lactate dehydrogenase (LDH) enzyme. LDH catalyses the conversion of pyruvate to lactic acid (the end product of anaerobic fermentation), concurrently oxidising the coenzyme NADH to NAD+ and H+. Restoring NAD+ levels is essential to the continuation of anaerobic energy metabolism through glycolysis. Because cancer cells preferentially use anaerobic metabolism, inhibition of LDH has been shown to inhibit tumor formation and growth, making it an interesting potential avenue of cancer treatment.
Oxalic acid's main applications include cleaning or bleaching, especially for the removal of rust (as an iron complexing agent). Its utility in rust removal agents is due to its forming a stable, water-soluble salt with ferric iron, the ferrioxalate ion. The cleaning product Zud contains oxalic acid. Oxalic acid is an ingredient in some tooth whitening products. About 25% of the oxalic acid produced is used as a mordant in dyeing processes. It is also used in bleaches, especially for pulpwood, for rust removal and other cleaning, in baking powder, and as a third reagent in silica analysis instruments.
Niche uses
Dilute solutions (0.05–0.15 M) of oxalic acid can be used to remove iron from clays such as kaolinite to produce light-colored ceramics.
Oxalic acid is used to clean minerals.
Oxalic acid is also widely used as a wood bleach, most often in its crystalline form to be mixed with water to its proper dilution for use.
Oxalic acid has an oral LDLo (lowest published lethal dose) of 600 mg/kg. It has been reported that the lethal oral dose is 15 to 30 grams. The toxicity of oxalic acid is due to kidney failure caused by precipitation of solid calcium oxalate.
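As a quick, hedged arithmetic check of these dose figures, a minimal Python sketch (the body masses are illustrative assumptions, not from the text):

LDLO_MG_PER_KG = 600   # lowest published lethal oral dose from the text, mg/kg

for body_mass_kg in (25, 50, 70):   # assumed body masses for illustration
    total_g = LDLO_MG_PER_KG * body_mass_kg / 1000.0   # mg -> g
    print(f"{body_mass_kg} kg body mass -> {total_g:.0f} g of oxalic acid")

Scaling 600 mg/kg over plausible body masses gives roughly 15 to 40 g, consistent with the reported lethal oral dose of 15 to 30 grams.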
Oxalate is known to cause mitochondrial dysfunction.
Ingestion of ethylene glycol results in oxalic acid as a metabolite which can also cause acute kidney failure.
Kidney stones
Most kidney stones, about 76%, are composed of the calcium salt of oxalic acid. Oxalic acid can also cause joint pain through the formation of similar precipitates in the joints. Calcium hydroxide decreases urinary oxalate in both humans and rats. Ingesting calcium-containing foods, such as milk, together with foods high in oxalic acid causes calcium oxalate to form in the stomach, so that it is not absorbed into the body.
#1249 2022-01-10 00:05:49
Re: Miscellany
1225) Citric Acid
Citric acid was first isolated from lemon juice by Swedish chemist Carl Wilhelm Scheele in 1784 and is manufactured by fermentation of cane sugar or molasses in the presence of a fungus, Aspergillus niger. It is used in confections and soft drinks (as a flavouring agent), in metal-cleaning compositions, and in improving the stability of foods and other organic substances (by suppressing the deleterious action of dissolved metal salts).
Citric acid is an organic compound with the chemical formula HOC(CO2H)(CH2CO2H)2. It is a colorless weak organic acid. It occurs naturally in citrus fruits. In biochemistry, it is an intermediate in the citric acid cycle, which occurs in the metabolism of all aerobic organisms.
More than two million tons of citric acid are manufactured every year. It is used widely as an acidifier, as a flavoring, and as a chelating agent.
Natural occurrence and industrial production
Chemical characteristics
Citric acid can be esterified at one or more of its three carboxylic acid groups to form any of a variety of mono-, di-, tri-, and mixed esters.
Cosmetics, pharmaceuticals, dietary supplements, and foods
Other uses
Citric acid is used as an odorless alternative to white vinegar for fabric dyeing with acid dyes.
Soldering flux. Citric acid is an excellent soldering flux, either dry or as a concentrated solution in water. It should be removed after soldering, especially with fine wires, as it is mildly corrosive. It dissolves and rinses quickly in hot water.
Although a weak acid, exposure to pure citric acid can cause adverse effects. Inhalation may cause cough, shortness of breath, or sore throat. Over-ingestion may cause abdominal pain and sore throat. Exposure of concentrated solutions to skin and eyes can cause redness and pain. Long-term or repeated consumption may cause erosion of tooth enamel.
#1250 2022-01-10 19:24:49
Re: Miscellany
1226) Butterfly
Butterflies are often polymorphic, and many species make use of camouflage, mimicry, and aposematism to evade their predators. Some, like the monarch and the painted lady, migrate over long distances. Many butterflies are attacked by parasites or parasitoids, including wasps, protozoans, flies, and other invertebrates, or are preyed upon by other organisms. Some species are pests because in their larval stages they can damage domestic crops or trees; other species are agents of pollination of some plants. Larvae of a few butterflies (e.g., harvesters) eat harmful insects, and a few are predators of ants, while others live as mutualists in association with ants. Culturally, butterflies are a popular motif in the visual and literary arts. The Smithsonian Institution says "butterflies are certainly one of the most appealing creatures in nature".
Butterfly, (superfamily Papilionoidea), is any of numerous species of insects belonging to multiple families. Butterflies, along with the moths and the skippers, make up the insect order Lepidoptera. Butterflies are nearly worldwide in their distribution.
The wings, bodies, and legs, like those of moths, are covered with dustlike scales that come off when the animal is handled. Unlike moths, butterflies are active during the day and are usually brightly coloured or strikingly patterned. Perhaps the most distinctive physical features of the butterfly are its club-tipped antennae and its habit of holding the wings vertically over the back when at rest. The lepidopteran life cycle has four stages: egg, larva (caterpillar), pupa (chrysalis), and adult (imago). The larvae and adults of most butterflies feed on plants, often only specific parts of specific types of plants.
The butterfly families include: Pieridae, the whites and sulfurs, known for their mass migrations; Papilionidae, the swallowtails and parnassians; Lycaenidae, including the blues, coppers, hairstreaks, and gossamer-winged butterflies; Riodinidae, the metalmarks, found chiefly in the American tropics; Nymphalidae, the brush-footed butterflies; Hesperiidae, the skippers; and Hedylidae, the American moth-butterflies (sometimes considered a sister group to Papilionoidea). The brush-footed butterflies represent the largest and most diverse family and include such popular butterflies as the admirals, fritillaries, monarchs, zebras, and painted ladies.
Admiral (subfamily Limenitidinae) is any of several butterfly species in the family Nymphalidae (order Lepidoptera) that are fast-flying and much prized by collectors for their coloration, which consists of black wings with white bands and reddish brown markings. The migratory red admiral (Vanessa atalanta), placed in the subfamily Nymphalinae, is widespread in Europe, Scandinavia, North America, and North Africa and feeds on stinging nettles. The western, or Weidemeyer's, admiral (Limenitis weidemeyerii) is found in the western United States. The white admiral (L. arthemis), a species made up of a white form and a red-spotted purple form, was once thought to be two distinct species. The white admiral occurs in North America and from Great Britain across Eurasia to Japan, and feeds on honeysuckle. The Indian red admiral, V. indica, is found in the Canary Islands as well as India and is distinguished by a red band on the forewings wider than that of V. atalanta.
Painted lady
What is a butterfly?
Like all other insects, butterflies have six legs and three main body parts: head, thorax (chest or mid section) and abdomen (tail end). They also have two antennae and an exoskeleton.
The difference between a butterfly and a moth?
Both butterflies and moths belong to the same insect group called Lepidoptera. In general, butterflies differ from moths in the following ways: (1) Butterflies usually have clubbed antennae but moths have fuzzy or feathery antennae. (2) Butterflies normally are active during the daytime while most moths are active at night. (3) When a butterfly rests, it will do so with its wings held upright over its body. Moths, on the other hand, rest with their wings spread out flat. Butterflies will, however, bask with their wings out-stretched. (4) Butterflies are generally more brightly colored than moths, however, this is not always the case. There are some very colorful moths.
Butterfly life cycle
A life cycle is made up of the stages that a living organism goes through during its lifetime from beginning to end. A butterfly undergoes a process called complete metamorphosis during its life cycle. This means that the butterfly changes completely from its early larval stage, when it is a caterpillar, until the final stage, when it becomes a beautiful and graceful adult butterfly. The butterfly life cycle has four stages: egg, larva, pupa, and adult.
The first stage of the butterfly life cycle is the egg or ovum. Butterfly eggs are tiny, vary in color and may be round, cylindrical or oval. The female butterfly attaches the eggs to leaves or stems of plants that will also serve as a suitable food source for the larvae when they hatch.
The larva, or caterpillar, that hatches from the egg is the second stage in the life cycle. Caterpillars often, but not always, have several pairs of true legs, along with several pairs of false legs or prolegs. A caterpillar's primary activity is eating. They have a voracious appetite and eat almost constantly. As the caterpillar continues to eat, its body grows considerably. The tough outer skin or exoskeleton, however, does not grow or stretch along with the enlarging caterpillar. Instead, the old exoskeleton is shed in a process called molting and it is replaced by a new, larger exoskeleton. A caterpillar may go through as many as four to five molts before it becomes a pupa.
The third stage is known as the pupa or chrysalis. The caterpillar attaches itself to a twig, a wall or some other support and the exoskeleton splits open to reveal the chrysalis. The chrysalis hangs down like a small sack until the transformation to butterfly is complete. The casual observer may think that because the pupa is motionless that very little is going on during this "resting stage." However, it is within the chrysalis shell that the caterpillar's structure is broken down and rearranged into the wings, body and legs of the adult butterfly. The pupa does not feed but instead gets its energy from the food eaten by the larval stage. Depending on the species, the pupal stage may last for just a few days or it may last for more than a year. Many butterfly species overwinter or hibernate as pupae.
The fourth and final stage of the life cycle is the adult. Once the chrysalis casing splits, the butterfly emerges. It will eventually mate and lay eggs to begin the cycle all over again. Most adult butterflies will live only a week or two, while a few species may live as long as 18 months.
Butterfly activities
Butterflies are complex creatures. Their day-to-day lives can be characterized by many activities. If you are observant, you may see butterflies involved in many of the following activities. To observe some activities, such as hibernation, may involve some detective work. To observe other activities, such as basking, puddling, or migrating, you will need to be at the proper place at the proper time. Keep an activity log and see how many different butterflies you can spot involved in each activity. The information from the individual butterfly pages may give you some hints as to where (or on what plants) some of these activities are likely to occur.
The larval or caterpillar stage and the adult butterfly have very different food preferences, largely due to the differences in their mouth parts. Both types of foods must be available in order for the butterfly to complete its life cycle.
Caterpillars are very particular about what they eat, which is why the female butterfly lays her eggs only on certain plants. She instinctively knows what plants will serve as suitable food for the hungry caterpillars that hatch from her eggs. Caterpillars don't move much and may spend their entire lives on the same plant or even the same leaf! Their primary goal is to eat as much as they can so that they become large enough to pupate. Caterpillars have chewing mouth parts, called mandibles, which enable them to eat leaves and other plant parts. Some caterpillars are considered pests because of the damage they do to crops. Caterpillars do not need to drink additional water because they get all they need from the plants they eat.
Adult butterflies are also selective about what they eat. Unlike caterpillars, butterflies can roam about and look for suitable food over a much broader territory. In most cases, adult butterflies are able to feed only on various liquids. They drink through a tube-like tongue called a proboscis. It uncoils to sip liquid food, and then coils up again into a spiral when the butterfly is not feeding. Most butterflies prefer flower nectar, but others may feed on the liquids found in rotting fruit, in ooze from trees, and in animal dung. Butterflies prefer to feed in sunny areas protected from wind.
Butterflies are cold-blooded, meaning they cannot regulate their own body temperature. As a result, their body temperature changes with the temperature of their surroundings. If they get too cold, they are unable to fly and must warm up their muscles in order to resume flight. Butterflies can fly as long as the air is between 60°-108° F, although temperatures between 82°-100° F are best. If the temperature drops too low, they may seek a light colored rock, sand or a leaf in a sunny spot and bask. Butterflies bask with their wings spread out in order to soak up the sun's heat.
When butterflies get too hot, they may head for shade or for cool areas like puddles. Some species will gather at shallow mud puddles or wet sandy areas, sipping the mineral-rich water. Generally more males than females puddle and it is believed that the salts and nutrients in the puddles are needed for successful mating.
Patrolling and perching
There are two methods that a male butterfly might use in order to search for a female mate. It might patrol or fly over a particular area where other butterflies are active. If it sees a possible mate, it will fly in for a closer look. Or, instead, it might perch on a tall plant in an area where females may be present. If it spots a likely mate, it will swoop in to investigate. In either case, if he finds a suitable female he will begin the mating ritual. If he finds another male instead, a fierce fight may ensue.
A male butterfly has several methods of determining whether he has found a female of his own species. One way is by sight. The male will look for butterflies with wings that are the correct color and pattern. When a male sights a potential mate it will fly closer, often behind or above the female. Once closer, the male will release special chemicals, called pheromones, while it flutters its wings a bit more than usual. The male may also do a special "courtship dance" to attract the female. These "dances" consist of flight patterns that are peculiar to that species of butterfly. If the female is interested she may join the male's dance. They will then mate by joining together end to end at their abdomens. During the mating process, when their bodies are joined, the male passes sperm to the female. As the eggs later pass through the female's egg-laying tube, they are fertilized by the sperm. The male butterfly often dies soon after mating.
After mating with a male, the female butterfly must go in search of a plant on which to lay her eggs. Because the caterpillars that will hatch from her eggs will be very particular about what they eat, she must be very particular in choosing a plant. She can recognize the right plant species by its leaf color and shape. Just to be sure, however, she may beat on the leaf with her feet. This scratches the leaf surface, causing a characteristic plant odor to be released. Once she is sure she has found the correct plant species, she will go about the business of egg-laying. As she lays her eggs, they are fertilized with the sperm that has been stored in her body since mating. Some butterflies lay a single egg, while others may lay their eggs in clusters. A sticky substance produced by the female enables the eggs to stick wherever she lays them, either on the underside of a leaf or on a stem.
Butterflies are cold-blooded and cannot withstand winter conditions in an active state. Butterflies may survive cold weather by hibernating in protected locations. They may use the peeling bark of trees, perennial plants, logs or old fences as their overwintering sites. They may hibernate at any stage (egg, larval, pupal or adult) but generally each species is dormant in only one stage.
Another way that butterflies can escape cold weather is by migrating to a warmer region. Some migrating butterflies, such as the painted lady and cabbage butterfly, fly only a few hundred miles, while others, such as the monarch, travel thousands of miles.
Monarchs are considered the long-distance champions of butterfly migration, traveling as many as 4000 miles round trip. They begin their flight before the autumn cold sets in, heading south from Canada and the northern United States. Monarchs migrate to the warmer climates of California, Florida and Mexico, making the trip in two months or less and feeding on nectar along the way. Once arriving at their southern destination, they will spend the winter resting for the return flight. Few of the original adults actually complete the trip home. Instead, the females mate and lay eggs along the way and their offspring finish this incredible journey.
Butterflies and caterpillars are preyed upon by birds, spiders, lizards and various other animals. Largely defenseless against many of these hungry predators, Lepidoptera have developed a number of passive ways to protect themselves. One way is by making themselves inconspicuous through the use of camouflage.
Caterpillars may be protectively colored or have structures that allow them to seemingly disappear into the background. For example, many caterpillars are green, making them difficult to detect because they blend in with the host leaf. Some larvae, particularly those in the Tropics, bear a resemblance to bird droppings, a disguise that makes them unappealing to would-be predators.
The coloration and pattern of a butterfly's wings may enable it to blend into its surrounding. Some may look like dead leaves on a twig when they are at rest with their wings closed. The under wing markings of the comma and question mark butterflies help them to go unnoticed when hibernating in leaf litter.
the sixth day: Scripture added a “hey” on the sixth [day], at the completion of the Creation, to tell us that He stipulated with them, [“you were created] on the condition that Israel accept the Five Books of the Torah.” [The numerical value of the “hey” is five.] (Tanchuma Bereishith 1). Another explanation for “the sixth day”: They [the works of creation] were all suspended until the “sixth day,” referring to the sixth day of Sivan, which was prepared for the giving of the Torah (Shab. 88a). [The letter “hey” is the definite article, alluding to the definitive sixth day, the sixth day of Sivan, when the Torah was given]
What Rashi is saying here is that the world was suspended in a state of superposition of being created and not being created, depending on the future choice to be made by the Jewish people – to accept the Torah or not. Until that moment – the sixth day of Sivan – the world existed in a quantum-mechanical state of superposition of existence and non-existence, as it were, just like Schrödinger's cat.
In quantum mechanics, a system (say, a particle) is described by a wave function (or wavefunction) that gives the distribution of probabilities of finding the system (e.g., the particle) in a particular region of space. The wavefunction obeys the Schrödinger equation, which predicts the evolution of the wavefunction in time. While the evolution of the wavefunction is deterministic, the predictions based on this equation are not deterministic. All we can predict is the probability of finding a particle in a particular region of space. However, when we conduct an experiment, we always get a definite result for the measured property. This paradox is called the measurement paradox, which different competing interpretations of quantum mechanics strive to explain.
The Everett’s many-worlds interpretation of quantum mechanics postulates that a choice splits the universe into branches, where each of the possible outcomes is realized. In the orthodox Copenhagen interpretation of quantum mechanics, the choice causes the collapse of the wavefunction, whereby the plurality of possibilities is collapsed into a single actuality. Until such collapse, the system (e.g., particle) is in a state of superposition of all possible states. Thus, a Schrödinger cat exists in a state of superposition of being alive and dead at the same time, until its state is collapsed by the act of observation, whereby the observer “chooses,” albeit subconsciously, between two possible states.
According to Rashi, until the Sinaitic revelation the world existed in a superposition of two states – existence and non-existence: "[the works of creation] were all suspended until the 'sixth day.'" It was the choice of the Jewish people, who accepted the Torah, that collapsed the wavefunction and brought the world into existence.
I don't know if Rashi knew quantum mechanics, but his logic is clearly quantum-mechanical. This commentary is one of the most explicit examples of the quantum-mechanical concepts of superposition and the collapse of the wavefunction as they are used in the Torah.
It gives me a great pleasure to thank our son, Elie, who pointed out to me this fascinating parallel. May the Almighty send him, Eliyahu Chaim Yitzchak ben Leah, refuah shaleimah!
The Life of Dr. Feynman – Physics Research Paper
Feynman received a bachelor's degree from the Massachusetts Institute of Technology in 1939, and a PhD from Princeton University in 1942. His thesis advisor was John Archibald Wheeler. After Feynman completed his thesis on quantum mechanics, Wheeler showed it to Albert Einstein, but Einstein was unconvinced. While researching his Ph.D., Feynman married his first wife, Arline Greenbaum, who had been diagnosed with tuberculosis, a terminal illness at that time; they were careful, and Feynman never contracted TB.
At Princeton, the physicist Robert R. Wilson encouraged Feynman to participate in the Manhattan Project, the wartime U.S. Army project at Los Alamos developing the atomic bomb. He visited his wife in a sanatorium in Albuquerque on weekends, right up until her death in June 1945. He immersed himself in work on the project and was present at the Trinity bomb test. Feynman claimed to be the only person to see the explosion without the dark glasses provided, looking through a truck windshield to screen out harmful ultraviolet frequencies.
As a junior physicist, his work on the project was relatively removed from the major action, consisting mostly of administering the computation group of human computers in the Theoretical Division and then, with Nicholas Metropolis, setting up the system for using IBM punch cards for computation. Feynman actually succeeded in solving one of the equations for the project that were posted on the blackboards; however, "they didn't do the physics right," and his solution was not used in the project.
After the project, Feynman started working as a professor at Cornell University, where Hans Bethe, the theorist of stellar nuclear fusion, worked. However, he felt uninspired there; despairing that he had burned out, he turned to more concrete problems, such as analyzing the physics of a twirling, nutating dish as it is being balanced by a juggler. As it turned out, this work served him in future research. He was therefore surprised to be offered professorships from competing universities, eventually choosing to work at the California Institute of Technology in Pasadena, California, despite being offered a position near Princeton at the Institute for Advanced Study, which, at that time, included Albert Einstein on its list of elite faculty members.
Feynman rejected the Institute on the grounds that there were no teaching duties. Feynman found his students to be a source of inspiration and also, during uncreative times, comforting. He felt that if he could not be creative, at least he could teach.
Feynman is sometimes called the “Great Explainer”; he took great care when explaining topics to his students, making it a moral point not to make a topic arcane, but accessible to others. Thus clear thinking and clear presentation were fundamental prerequisites for his attention. It could be perilous to even approach him when unprepared, and he did not forget who the fools or pretenders were. On one sabbatical year, he returned to Newton’s Principia to study it anew; what he learned from Newton, he also passed along to his students, such as Newton’s attempted explanation of diffraction.
Feynman did much of his best work while at Caltech, including research in quantum electrodynamics; the problem for which Feynman won his Nobel Prize involved the probability of quantum states changing. He helped develop a functional integral formulation of quantum mechanics, in which every possible path from one state to the next is considered, the final amplitude being a sum over the possibilities. He also worked on the physics of the superfluidity of supercooled liquid helium, where helium seems to display a lack of viscosity when flowing; applying the Schrödinger equation to the question showed that the superfluid was displaying quantum mechanical behavior observable on a macroscopic scale, which helped with the problem of superconductivity. He further worked on weak decay, which shows itself in the decay of a neutron into an electron, a proton, and an anti-neutrino; developed in collaboration with Murray Gell-Mann, the theory was of massive importance and resulted in the discovery of a new force of nature.
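The "sum over every possible path" can be illustrated with a toy numerical sketch. This is not Feynman's own calculation: it assumes a free particle in one dimension, natural units (hbar = m = 1), and a coarse position grid, and it organizes the sum over all piecewise-linear paths as repeated matrix multiplication:

import numpy as np

x = np.linspace(-5.0, 5.0, 201)   # position grid
dx = x[1] - x[0]
dt = 0.1                          # time-slice width
n_slices = 10

# One-step kernel: exp(i * action of a straight segment), S = (x_j - x_i)^2 / (2*dt)
step = np.exp(1j * (x[None, :] - x[:, None]) ** 2 / (2.0 * dt)) * dx

# Multiplying one-step kernels sums amplitudes over all intermediate grid points,
# i.e. over all discretized paths (up to an overall normalization per slice).
K = np.linalg.matrix_power(step, n_slices)

print(abs(K[100, 100]))   # |amplitude| to start and end at the origin

Each matrix multiplication integrates over one intermediate position, so the final matrix element is the discretized sum over paths that the functional integral formalizes.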
He also developed Feynman diagrams, a bookkeeping device which helps in conceptualising and calculating interactions between particles in space-time. This device allowed him, and now others, to work with concepts which would have been less approachable without it, such as time reversibility and other fundamental processes. These diagrams are now fundamental for string theory and M-theory, and have even been extended topologically. Feynman’s mental picture for these diagrams started with the hard sphere approximation, and the interactions could be thought of as collisions at first. It was not until decades later that physicists thought of analyzing the nodes of the Feynman diagrams more closely. The world-lines of the diagrams have become tubes to better model the more complicated objects such as strings and M-branes.
From his diagrams of a small number of particles interacting in spacetime, Feynman could then model all of physics in terms of those particles' spins and the range of coupling of the fundamental forces. But the quark model was a rival to Feynman's parton formulation. Feynman did not dispute the quark model; for example, when the 5th quark was discovered, Feynman immediately pointed out to his students that the discovery implied the existence of a 6th quark, which was duly discovered in the decade after his death.
After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon, which has spin 1, he investigated the consequences of a free massless spin 2 field, and was able to derive the Einstein field equation of general relativity, but little more. Unfortunately, at this time he became exhausted by working on multiple major projects at the same time, including his Lectures in Physics.
While at Caltech, Feynman was asked to "spruce up" the teaching of undergraduates. After three years devoted to the task, a series of lectures was produced, eventually becoming the famous Feynman Lectures on Physics, which are a major reason that Feynman is still regarded by most physicists as one of the greatest teachers of physics ever. Feynman later won the Oersted Medal for teaching, of which he seemed especially proud. His students competed keenly for his attention; once he was awakened when a student solved a problem and dropped it in his mailbox at home; glimpsing the student sneaking across his lawn, he could not go back to sleep, and he read the student's solution. The next morning at breakfast he was interrupted by another triumphant student, but he informed him that he was too late.
Feynman was a keen and influential popularizer of physics in both his books and lectures, notably in a talk on nanotechnology called There's Plenty of Room at the Bottom. Feynman offered $1000 prizes for two of his challenges in nanotechnology. He was also one of the first scientists to realise the possibility of quantum computers. Though he never actually wrote any books himself, many of his lectures and other miscellaneous talks were turned into books, such as The Character of Physical Law and QED: The Strange Theory of Light and Matter. He would give lectures which his students would annotate into books, such as Statistical Mechanics and Lectures on Gravity. The Lectures on Physics occupied a physicist, Robert B. Leighton, as a full-time editor for a number of years.
Feynman travelled a lot, notably to Brazil, and near the end of his life schemed to visit the obscure Russian land of Tuva, a dream that, due to Cold War bureaucratic problems, was never realized. During this period he discovered that he had a form of cancer, but, thanks to surgery, he managed to hold it off.
Feynman had very liberal views on sexuality and was not ashamed of admitting it. In Surely You're Joking, Mr. Feynman!, he explains that he enjoyed hostess bars and topless dancing, and drew a decoration for a massage parlor. He also explains how he learned to play drums in acceptable samba style in Brazil (by persistence and practice). Such actions earned him a reputation for eccentricity. In addition, he considered using cannabis as well as LSD because he wished to know the effects of hallucinations.
Feynman was requested to serve on the presidential Rogers Commission, which investigated the Challenger disaster of 1986. Tactfully fed clues from a source with inside information, Feynman famously showed on television the crucial role played in the disaster by the booster's flexible O-ring gas seals, with a simple demonstration using a glass of ice water and a sample of O-ring material. His opinion of the cause of the accident differed from the official findings and was considerably more critical of the role of management in sidelining the concerns of engineers. After much petitioning, Feynman's minority report was included as an appendix to the official document, concluding: "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." The book What Do You Care What Other People Think? includes stories from Feynman's work on the commission. His engineering skill is reflected in his estimate of the reliability of the Space Shuttle (98%), which is unhappily reflected in the two failures over the 100-odd flights of the Space Shuttle as of 2003. However good he was at engineering, Feynman always drew a careful distinction between science and technology.
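As a sanity check on the 98% figure against the flight record mentioned above, a minimal sketch (the round flight count stands in for the text's "100-odd" and is an assumption):

n_flights = 100    # assumed "100-odd" shuttle flights as of 2003
n_failures = 2     # the two losses cited in the text

print(f"observed failure rate ~ {n_failures / n_flights:.1%}")   # ~2.0%
print(f"Feynman's estimate    ~ {1 - 0.98:.1%}")                 # 2% (98% reliability)

The two numbers agree to within the precision either one deserves, which is the point the paragraph is making.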
The cancer returned in 1987, with Feynman entering hospital a year later. Complications with surgery worsened his condition, whereupon Feynman decided to die with dignity and not accept any more treatment. He died on February 15, 1988.
Scientific journal
European Journal of Natural History
ISSN 2073-4972
RSCI Impact Factor = 0.372
Yakubovskiy E.G.
In this part of the article, the physical meaning of complex velocity is described. A relationship between the Schrödinger equation and the Navier–Stokes equation is obtained. The Schrödinger and Navier–Stokes equations, in general, have a countable number of turbulent energy levels and solutions.
large dimensionless unknown functions
solutions of non-linear partial differential equation
Navier – Stokes equation
turbulent function
fluid flow resistance coefficient
round pipeline
Physical Meaning of Complex Solution
Let us explain the physical meaning of the complex turbulent solution. We will consider a real solution x_α(t) of a system of ordinary differential equations.
Let us assume that the initial data have an average value $\bar{x}_\alpha(0)$ and mean square deviation $\sigma_\alpha(0)$.
The mean square deviation of the initial data for the Navier–Stokes equation is determined by surface roughness or by initial data which are not precisely defined. Then, for the mean square of the solution we have

$\overline{x_\alpha^2(t)} = \bar{x}_\alpha^2(t) + \sigma_\alpha^2(t)$. (1)
Here I will provide the formulation of the inverse Pythagorean theorem: for any three positive numbers a, b and c such that a² + b² = c², there is a right triangle with legs a and b and hypotenuse c. Hence the mean value and the mean square deviation form the legs, and the hypotenuse is the root mean square of the quantity. That is, the average $\bar{x}_\alpha$ is orthogonal to the mean square deviation $\sigma_\alpha$, which forms the imaginary part of the coordinate of the body. Thus the Cartesian space with an oscillatory high-frequency velocity (period of fluctuation less than the measurement time), obtained as a result of averaging in time, becomes a complex space. That is, in the case of a large mean square deviation, the real space should be considered as a complex three-dimensional space whose imaginary part corresponds to the mean square deviation. The complex combination is chosen in such a way that the imaginary part can have a positive or negative value; the mean square deviation satisfies this condition. But sometimes the mean square deviation is positive, for example in the case of dielectric permittivity, where positive and negative charges have an influence. In that case the real part is proportional to the positive mean square dipole deviation and the conductivity is proportional to the average value, but the conductivity is divided by the frequency, which has positive and negative sign.
Therefore, the algorithm for finding the average solution (or the average solution in phase space) and its mean square deviation reduces to finding a complex solution. The average solution corresponds to the real part of the solution, and the square of the imaginary part corresponds to the variance of the solution. This is the physical meaning of the complex solution: the real part is the average solution, and the imaginary part is the mean square deviation; the real and imaginary parts are orthogonal and form a complex space. Indeed, by the inverse Pythagorean theorem and formula (1), the mean value and the mean square deviation form the legs, and the root mean square is the hypotenuse.
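This decomposition can be checked numerically: for any fluctuating sample, the modulus of (mean + i*standard deviation) equals the root mean square, which is exactly formula (1). A minimal Python sketch, where the sample itself is an arbitrary illustrative assumption:

import numpy as np

rng = np.random.default_rng(1)
x = 2.0 + 0.5 * rng.standard_normal(100_000)   # arbitrary fluctuating quantity

mean = x.mean()
std = x.std()                    # mean square deviation (population form)
rms = np.sqrt((x ** 2).mean())   # root mean square

z = mean + 1j * std              # the "complex solution": average + i * deviation
print(abs(z), rms)               # equal, since mean^2 + std^2 = mean(x^2)

The real and imaginary axes play the role of the two orthogonal legs, and the modulus is the hypotenuse, as in the inverse Pythagorean picture above.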
Here we would like to note that when the flow motion is calculated and only one term of the series is taken into account, it is necessary to take the square root of the imaginary part when the forward velocity is calculated. The imaginary part corresponds to the square of the oscillatory part of the dimensionless velocity.
This situation is similar to the calculation of the deviation in a random walk with a forward or backward step chosen with probability 1/2, where the position of the point after N steps is of the order of $\sqrt{N}$.
The real and imaginary parts of the solution are located on different axes of the complex space. But if the imaginary dimensionless part is averaged,
the solution is equal to the modulus of the resulting value; for different roughness, the imaginary part of the solution should be multiplied by an averaging multiplier. At the same time, if all coefficients of the series in the non-linear system of equations are calculated, it is not necessary to take the square root of the imaginary part: it is necessary to sum the complex values and take the modulus of the sum.
Now we will show that the imaginary part of the complex derivative of a coordinate in the phase space of the differential equation forms a pulsing coordinate motion in phase space, i.e. in the space of the variables.
Average values are used for the variables since, at the molecular level, the medium is not smooth.
Lemma. A complex solution yields a fluctuating, pulsing function of the flow coordinates.
The imaginary part of the velocity corresponds to the rotation speed in phase space. As the rotation radius is known, it is also possible to determine the rotation frequency. In the rotation plane, a complex velocity with constant rotation radius and constant frequency can be written in the form $V = V_0 e^{i\omega t}$.
In the case of a stationary velocity varying over space, this formula can be written locally for one plane in the same way, with the frequency depending on time, since a phase shift arises from the harmonic oscillations at neighboring points. A sum of harmonic oscillations with different time-dependent frequencies defines a pulsing mode in phase space at stationary complex velocity. That is, this complex velocity defines the coordinates of phase-space points pulsing in time. The situation is similar to the existence of several stationary vortexes defining the pulsing rotation of the flow.
Lemma 6. Three-dimensional flow velocity can be written in the form
yakubovsk356.wmf yakubovsk357.wmf
And velocity is defined in the form of integral of tangent acceleration by formula
Integral of perpendicular component of acceleration defines perpendicular component of velocity by formula
At that value of local velocity is yakubovsk363.wmf yakubovsk364.wmf. But value of velocity obtained as a result of integration of centripetal acceleration is not zero yakubovsk365.wmf, but this velocity become equal to zero for the same initial point, at constant particle velocity and constant curvature radius with rotation period yakubovsk366.wmf, where R – curvature radius. For variable particle velocity depending on time, when one of the integrals yakubovsk367.wmf, which, at finite curvature radius of one sign of the trajectory, is finite and equal to
yakubovsk368.wmf yakubovsk369.wmf
since the tangential direction tl changes sign in the course of the rotation.
The tangential acceleration is defined by the formula
The directions of the velocities ΔVtl and ΔVnl are orthogonal, and their sum yields the increase of the modulus of the velocity of motion,
since yakubovsk372.wmf
The components of these projections, differentiated with respect to time, define the tangential and orthogonal accelerations. At the same time, the concepts of tangential and orthogonal velocities are introduced; in Cartesian space these are not orthogonal, (Vt, Vl) ≠ 0, but in the six-dimensional complex space they are orthogonal, and the modulus of the complex vector Vl = Vtl + iVnl is equal to
This can be proved by using the expressions yakubovsk374.wmf and yakubovsk375.wmf and calculating the modulus as the product of complex-conjugate vectors, taking into account the orthogonality of the six real unit vectors.
Thus, a solution of the Navier–Stokes equations for non-multiple equilibrium positions is obtained. It is defined by the expressions
yakubovsk377.wmf l = 1, ..., 2N;
where the values yakubovsk379.wmf are the coordinates of the equilibrium positions.
The laminar solution corresponds to the solution of the linear problem with the convective term averaged; the structure of the turbulent solution is
where gnk(t) is a known, defined continuous function, the value of yakubovsk381.wmf is determined from the initial conditions, and yakubovsk382.wmf. The solution contains many poles which, for a real solution with real initial data, yield infinities.
For real time and complex initial conditions, which define a complex value of yakubovsk383.wmf, the complex solution is finite, since g(t) is real.
At the same time, the formula
yakubovsk384.wmf l = 1, ..., N. (2)
may have branch points, at which the solution continuously passes onto another branch of the solution. This does not contradict the uniqueness theorem for the Cauchy problem, since the left-hand side of the differential equation tends to infinity at a branch point. The derivative of the right-hand side of the ordinary differential equation also tends to infinity at a branch point. So we have a point of discontinuity of the solution, but the solution can be continued by formula (2).
This situation is similar to the Schrödinger equation, for which we generally have a finite number of solutions. This is not surprising, as the Schrödinger equation can be reduced to a Navier–Stokes equation. We now prove this. For this purpose we write down the Schrödinger equation and transform it using the equality
Dividing the equation by the mass m times ψ, we obtain the equation
Now we take the gradient of both sides of this partial differential equation and introduce the real velocity into the formula
Substituting the velocity value into the transformed Schrödinger equation, we have
yakubovsk390.wmf yakubovsk391.wmf
We now have a three-dimensional Navier–Stokes equation with a pressure corresponding to the potential. Nevertheless, the hydrodynamic problem differs from the Navier–Stokes equation derived from the Schrödinger equation together with the continuity equation.
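For reference, the standard route from the Schrödinger equation to a hydrodynamic form is the Madelung transformation; the sketch below is that textbook derivation, offered as a plausible reading of the steps above rather than as the author's exact calculation.

```latex
% Madelung transformation (textbook form, not the author's notation)
\psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad \mathbf{v} \equiv \frac{\nabla S}{m}
% Substituting into  i\hbar\,\partial_t\psi = -(\hbar^2/2m)\nabla^2\psi + V\psi
% and separating real and imaginary parts gives continuity and Euler-type equations:
\partial_t \rho + \nabla\cdot(\rho\,\mathbf{v}) = 0
\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\,\mathbf{v}
  = -\frac{1}{m}\nabla\!\left( V - \frac{\hbar^2}{2m}
    \frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} \right)
% The bracketed term contains the quantum potential, which plays the role of a
% pressure; no viscous term appears, which is one concrete sense in which the
% correspondence with Navier-Stokes remains inexact, as noted above.
```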
At the same time, it is possible to draw an analogy between the laminar, single-valued mode and the free, single-valued description of bodies,
and between the turbulent mode, which has a finite number of solutions, and the description of bound particles, which also has a finite number of solutions. Between the turbulent complex mode and the laminar real mode there is a boundary, the critical Reynolds number. A similar boundary exists between the descriptions of free and bound particles; it corresponds to the transition of the energy from negative to positive values. In turn, the Navier–Stokes equation then has to have discrete energy levels of turbulent flow states, and transitions between these states, with emission or absorption of energy, have to be realized.
The boundary between the free-particle description and the bound-particle description can be defined: it is the transition to a complex quantum number, or to an infinite principal quantum number of the hydrogen atom. As the quantum number of the hydrogen atom becomes infinite, the expression 1/n², where n is the principal quantum number, passes through zero and becomes imaginary and continuous. The wave function of free motion, which is continuous at continuous energy, corresponds to the laminar solution of the hydrodynamic problem, for which a single-valued solution exists. For a large quantum number the system is quasi-classical, i.e. for a quantum number close to the boundary (quantum number equal to infinity) the system is almost classical.
And there is a boundary between the free solution and the solution that describes bound states: the zero value of the energy. Likewise, for non-linear partial differential equations, a boundary exists between the turbulent complex solution and the laminar real solution. |
c34d2a6ac299e946 | Customer Review Essay
1. Describe the Michelson-Morley experiment and discuss the importance of its negative result. 2. Calculate the fringe shift in the Michelson-Morley experiment. Given that: [pic], [pic], [pic], and [pic]. 3. State the fundamental postulates of Einstein's special theory of relativity and deduce from them the Lorentz transformation equations. 4. Explain relativistic length contraction and time dilation in the special theory of relativity. What are proper length and proper time interval? 5. A rod has length 100 cm. When the rod is in a satellite moving with velocity 0.9c relative to the laboratory, what is the length of the rod as measured by an observer (i) in the satellite, and (ii) in the laboratory?
6. A clock keeps correct time. With what speed should it be moved relative to an observer so that it may appear to lose 4 minutes in 24 hours? 7. In the laboratory the 'lifetime' of a particle moving with speed 2.8×10^8 m/s is found to be 2.5×10^-7 s. Calculate the proper lifetime of the particle. 8. Derive the relativistic law of addition of velocities and prove that the velocity of light is the same in all inertial frames irrespective of their relative speed. 9. Two particles move towards each other, each with speed 0.9c with respect to the laboratory.
Calculate their relative speed. 10. Rockets A and B are observed from the earth to be traveling with velocities 0.8c and 0.7c along the same line in the same direction. What is the velocity of B as seen by an observer on A? 11. Show that the relativistic invariance of the law of conservation of momentum leads to the concept of the variation of mass with speed and to mass-energy equivalence. 12. A proton of rest mass [pic] is moving with a velocity of 0.9c. Calculate its mass and momentum.
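For a quick numerical check of problems 9 and 10, the standard relativistic composition of collinear velocities can be coded directly (a minimal sketch; all speeds in units of c):

```python
def compose(u, v):
    """Relativistic composition of collinear velocities, in units of c."""
    return (u + v) / (1 + u * v)

# Problem 9: two particles approaching head-on, each at 0.9c in the lab
print(compose(0.9, 0.9))              # ~0.9945 c relative speed

# Problem 10: B (0.7c) as seen from A (0.8c), same direction
print((0.8 - 0.7) / (1 - 0.8 * 0.7))  # ~0.227 c
```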
(Module1: Special Theory of Relativity)
13. The speed of an electron is doubled from 0.2c to 0.4c. By what ratio does its momentum increase? 14. A particle has kinetic energy 20 times its rest energy. Find the speed of the particle in terms of c. 15. Dynamite liberates about 5.4×10^6 J/kg when it explodes. What fraction of its total energy is this amount? 16. A stationary body explodes into two fragments, each of mass 1.0 kg, that move apart at speeds of 0.6c relative to the original body. Find the mass of the original body. 17. At what speed does the kinetic energy of a particle equal its rest energy? 18. What should be the speed of an electron so that its mass becomes equal to the mass of the proton? Given: mass of electron = 9.1×10^-31 kg and mass of proton = 1.67×10^-27 kg.
19. An electron is moving with a speed of 0.9c. Calculate (i) its total energy and (ii) the ratio of Newtonian kinetic energy to relativistic energy. Given: [pic] and [pic]. 20. (i) Derive a relativistic expression for the kinetic energy of a particle in terms of momentum. (ii) Show that the momentum of a particle of rest mass [pic] and kinetic energy [pic] is given by [pic]. 21. Find the momentum (in MeV/c) of an electron whose speed is 0.60c. Verify that v/c = pc/E.
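Problem 21 can be checked in a few lines (a sketch; the electron rest energy 0.511 MeV is the standard value):

```python
import math

me_c2 = 0.511                  # electron rest energy, MeV
beta = 0.60
gamma = 1 / math.sqrt(1 - beta**2)

pc = gamma * beta * me_c2      # p*c in MeV, so p ~ 0.383 MeV/c
E = gamma * me_c2              # total energy, MeV
print(pc, pc / E)              # 0.383..., 0.6 -- indeed pc/E = v/c
```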
TUTORIAL SHEET: 2(a)
(Module2: Wave Mechanics)
1. What do you understand by the wave nature of matter? Obtain an expression for the de Broglie wavelength of matter waves. 2. Calculate the de Broglie wavelengths of an electron and a photon, each of energy 2 eV. 3. Calculate the de Broglie wavelength associated with a proton moving with a velocity equal to 1/20 of the velocity of light. 4. Show that the wavelength of a 150 g rubber ball moving with a velocity of [pic] is too short to be detected.
5. The energy of a particle at absolute temperature T is of the order of [pic]. Calculate the wavelength of thermal neutrons at [pic]. Given: [pic], [pic] and [pic]. 6. Can a photon and an electron of the same momentum have the same wavelength? Calculate their wavelengths if the two have the same energy. 7. Two particles A and B are in motion. If the wavelength associated with particle A is [pic], calculate the wavelength of particle B if its momentum is half that of A. 8. Show that when electrons are accelerated through a potential difference V, their wavelength, taking the relativistic correction into account, is [pic], where e and [pic] are the charge and rest mass of the electron, respectively. 9. A particle of rest mass m0 has a kinetic energy K. Show that its de Broglie wavelength is given by [pic]
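For problem 2, the two wavelengths differ by almost three orders of magnitude (a sketch with standard constants; the electron is treated non-relativistically, p = sqrt(2mE)):

```python
import math

h, c = 6.626e-34, 2.998e8       # J s, m/s
me, e = 9.109e-31, 1.602e-19    # kg, C
E = 2 * e                       # 2 eV in joules

print(h / math.sqrt(2 * me * E))   # electron: ~8.7e-10 m (~0.87 nm)
print(h * c / E)                   # photon:   ~6.2e-07 m (~620 nm)
```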
TUTORIAL SHEET: 2(a)
(Module2: Wave Mechanics)
16. Explain the Heisenberg uncertainty principle. Describe the gamma-ray microscope experiment used to establish the Heisenberg uncertainty principle. 17. How does the Heisenberg uncertainty principle hint at the absence of electrons in an atomic nucleus? 18. Calculate the uncertainty in the momentum of an electron confined in a one-dimensional box of length [pic]. Given: [pic]
(Module 2: Wave Mechanics)
1. Differentiate between Ψ and |Ψ|². Discuss the Born postulate regarding the probabilistic interpretation of a wave function. 2. Write down the set of conditions which a solution of the Schrödinger wave equation must satisfy to be called a wave function. 3. What do you mean by normalization and orthogonality of a wave function? 4. Show that if the potential energy V(x) is changed everywhere by a constant, the time-independent wave equation is unchanged. What is the effect on the energy eigenvalues? 5. Show that [pic], where [pic] is the reduced mass and B is the binding energy of the particles. 6. Show that [pic] is an acceptable eigenfunction, where k is some finite constant. Also normalize it over the region [pic].
7. Explain the meaning of the expectation value of x. Write down the operators for position, linear momentum and total energy. 8. Show that the time-independent Schrödinger equation is an example of an eigenvalue equation. 9. Derive the time-independent Schrödinger equation from the time-dependent equation for a free particle. 10. For a free particle, show that the Schrödinger wave equation leads to the de Broglie relation [pic]. 11. Derive an expression for the probability current density (particle flux). Also show that the probability density ρ and probability current density [pic] satisfy the continuity equation [pic]
(Module 2: Wave Mechanics)
12. Write the Schrödinger equation for a particle in a box and determine expressions for the energy eigenvalues and eigenfunctions. Does this predict that the particle can possess zero energy? 13. Find the expectation values of the position and of the momentum of a particle trapped in a one-dimensional rigid box of length L. 14. The potential function of a particle moving along the positive x-axis is given by
V(x) = 0 for x < 0
V(x) = V0 for x [pic] 0
Calculate the reflectance R and transmittance T at the potential discontinuity and show that R + T = 1. 15. An electron is bound by a potential which closely approaches an infinite square well of width [pic]. Calculate the lowest three permissible quantum energies the electron can have. 16. A particle is moving in a one-dimensional box and its wave function is given by [pic]. Find the expression for the normalized wave function.
17. Calculate the value of the lowest energy of an electron moving in a one-dimensional force-free region of length 4 [pic]. 18. A particle of mass [pic] kg is moving with a speed of [pic] in a box of length [pic]. Assuming this to be a one-dimensional square-well problem, calculate the value of n. 19. A beam of electrons impinges on an infinitely wide energy barrier of height 0.03 eV. Find the fraction of electrons reflected at the barrier if the energy of the electrons is (a) 0.025 eV (b) 0.030 eV (c) 0.040 eV
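For problems such as 15 and 17, the infinite-well levels E_n = n^2 h^2 / (8 m L^2) can be evaluated directly (a sketch; the 1 Å width below is an illustrative stand-in, since the actual widths were lost with the [pic] fields):

```python
h = 6.626e-34      # J s
me = 9.109e-31     # kg
e = 1.602e-19      # C
L = 1e-10          # well width in m; illustrative choice, not from the sheet

for n in (1, 2, 3):
    E = n**2 * h**2 / (8 * me * L**2)
    print(n, round(E / e, 1), "eV")   # ~37.6, 150.4, 338.5 eV
```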
(Module 3: Atomic Physics)
1. What are the essential features of the vector atom model? Also discuss the quantum numbers associated with this model. 2. For an electron orbit with quantum number l = 2, state the possible values of the components of the total angular momentum along a specified direction. 3. Differentiate between the L-S coupling (Russell-Saunders coupling) and j-j coupling schemes. 4. Find the possible values of J under the L-S and j-j coupling schemes if the quantum numbers of the two electrons in a two-valence-electron atom are n1 = 5, l1 = 1, s1 = 1/2 and n2 = 6, l2 = 3, s2 = 1/2.
5. Find the spectral terms for the 3s 2d and 4p 4d configurations. 6. Applying the selection rules, show which of the following transitions are allowed and which are not: D5/2 [pic] P3/2; D3/2 [pic] P3/2; D3/2 [pic] P1/2; P3/2 [pic] S1/2; P1/2 [pic] S1/2
7. What is the Paschen-Back effect? Show that in a strong magnetic field the anomalous Zeeman pattern changes to the normal Zeeman pattern. 8. Why, in the normal Zeeman effect, is a singlet line always split into three components only? 9. Illustrate the Zeeman effect with the example of the sodium D1 and D2 lines. 10. An element under spectroscopic examination is placed in a magnetic field of flux density 0.3 Wb/m^2. Calculate the Zeeman shift of a spectral line of wavelength 450 nm. 11. The Zeeman components of a 500 nm spectral line are 0.0116 nm apart when the magnetic field is 1.0 T. Find the ratio (e/m) for the electron. 12. Calculate the wavelength separation between the two component lines observed in the normal Zeeman effect, where the magnetic field used is 0.4 Wb/m^2, the specific charge is 1.76×10^11 C/kg, and λ = 6000 [pic].
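Problem 10 follows from the normal-Zeeman line splitting Δν = eB/(4πm), i.e. Δλ = eBλ²/(4πmc) (a sketch with standard constants):

```python
import math

e, me, c = 1.602e-19, 9.109e-31, 2.998e8
B, lam = 0.3, 450e-9               # T, m

dlam = e * B * lam**2 / (4 * math.pi * me * c)
print(dlam)                        # ~2.8e-12 m, i.e. ~0.0028 nm
```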
TUTORIAL SHEET: 3(b)
(Module 3: Atomic Physics)
1. Distinguish between spontaneous and stimulated emission. Derive the relation between the transition probabilities of spontaneous and stimulated emission. 2. What are the characteristics of laser beams? Describe their important applications. 3. Calculate the number of photons emitted per second by a 5 mW laser, assuming that it emits light of wavelength 632.8 nm. 4. Explain (a) atomic excitations, (b) the transition process, (c) metastable states and (d) optical pumping. 5. Find the intensity of a laser beam of 15 mW power and a diameter of 1.25 mm. Assume the intensity to be uniform across the beam.
6. Calculate the energy difference in eV between the energy levels of the Ne atoms of a He-Ne laser, the transition between which results in the emission of light of wavelength 632.8 nm. 7. What is population inversion? How is it achieved in a ruby laser? Describe the construction of the ruby laser. 8. Explain the operation of a gas laser with its essential components. How does stimulated emission take place with exchange of energy between helium and neon atoms? 9. What is the difference between the working principles of three-level and four-level lasers? Give an example of each type. 10. How is a four-level laser superior to a three-level laser?
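Problem 3 is a one-liner once the photon energy hc/λ is known (a sketch with standard constants):

```python
h, c = 6.626e-34, 2.998e8
P, lam = 5e-3, 632.8e-9            # W, m

E_photon = h * c / lam             # ~3.14e-19 J per photon
print(P / E_photon)                # ~1.6e16 photons per second
```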
TUTORIAL SHEET: 3(c)
(Module 3: Atomic Physics)
1. Distinguish between the continuous X-radiation and characteristic X-radiation spectra of an element. 2. An X-ray tube operated at 100 kV emits a continuous X-ray spectrum with short-wavelength limit λmin = 0.125 [pic]. Calculate Planck's constant. 3. State Bragg's law. Describe how Bragg's law can be used in the determination of crystal structure. 4. Why is the diffraction effect in crystals not observed for visible light? 5. Electrons are accelerated by 344 volts and are reflected from a crystal. The first reflection maximum occurs when the glancing angle is 30°. Determine the spacing of the crystal. (h = 6.62×10^-34 J·s, e = 1.6×10^-19 C and m = 9.1×10^-31 kg) 6. In Bragg's reflection of X-rays, a reflection was found at a 30° glancing angle with lattice planes of spacing 0.187 nm. If this is a second-order reflection, calculate the wavelength of the X-rays.
7. Explain the origin of the characteristic X-radiation spectra of the elements. How can Moseley's law be explained on the basis of Bohr's model? 8. What is the importance of Moseley's law? Give the important differences between the X-ray spectra and the optical spectra of an element. 9. Deduce the wavelength of the [pic] line for an atom of Z = 92 by using Moseley's law. (R = 1.1×10^5 cm^-1). 10. If the Kα radiation of Mo (Z = 42) has a wavelength of 0.71 [pic], determine the wavelength of the corresponding radiation of Cu (Z = 29). 11. The wavelengths of the Lα X-ray lines of silver and platinum are 4.154 [pic] and 1.321 [pic], respectively. An unknown substance emits Lα X-rays of wavelength 0.966 [pic]. The atomic numbers of silver and platinum are 47 and 78 respectively. Determine the atomic number of the unknown substance.
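Problem 5 combines the de Broglie wavelength of 344 V electrons with Bragg's law 2d sin θ = nλ (a sketch; first order, n = 1):

```python
import math

h, me, e = 6.626e-34, 9.109e-31, 1.602e-19
V, theta = 344.0, math.radians(30)

lam = h / math.sqrt(2 * me * e * V)   # ~6.6e-11 m
d = lam / (2 * math.sin(theta))       # Bragg spacing for n = 1
print(lam, d)                         # d ~ 6.6e-11 m here, since sin 30 = 1/2
```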
TUTORIAL SHEET: 4(a)
(Module 4: Solid State Physics)
1. Discuss the basic assumptions of Sommerfeld's theory for the free-electron-gas model of metals. 2. Define the Fermi energy of the electron. Obtain the expression for the energy of a three-dimensional electron gas in a metal. 3. Prove that at absolute zero the energy states below the Fermi level are filled with electrons while above this level the energy states are empty. 4. Show that the average energy of an electron in an electron gas at absolute zero temperature is 3/5 [pic], where [pic] is the Fermi energy at absolute zero. 5. Prove that the Fermi level lies halfway between the conduction and valence bands in an intrinsic semiconductor. 6. Find the Fermi energy of electrons in copper on the assumption that each copper atom contributes one free electron to the electron gas. The density of copper is 8.94×10^3 kg/m^3 and its atomic mass is 63.5 u.
7. Calculate the Fermi energy at 0 K for the electrons in a metal having electron density 8.4×10^28 m^-3. 8. On the basis of the Kronig-Penney model, show that the energy spectrum of an electron in a linear crystalline lattice consists of alternating regions of allowed and forbidden energy. 9. Discuss the differences among the band structures of metals, insulators and semiconductors. How does the band-structure model enable you to better understand the electrical properties of these materials?
10. Explain how the energy bands of metals, semiconductors and insulators account for the following general optical properties: (a) metals are opaque to visible light, (b) semiconductors are opaque to visible light but transparent to infrared, (c) insulators such as diamond are transparent to visible light. 11. Discuss the position of the Fermi energy and the conduction mechanism in N- and P-type extrinsic semiconductors.
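Problem 7 uses the 0 K free-electron result E_F = (ħ²/2m)(3π²n)^(2/3) (a sketch with standard constants):

```python
import math

hbar, me, e = 1.055e-34, 9.109e-31, 1.602e-19
n = 8.4e28                          # electron density, m^-3

EF = (hbar**2 / (2 * me)) * (3 * math.pi**2 * n) ** (2 / 3)
print(EF / e)                       # ~7.0 eV
```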
TUTORIAL SHEET: 4(b)
(Module 4: Solid State Physics)
1. What do you mean by superconductivity? Give the elementary properties of superconductors. 2. Discuss the effect of a magnetic field on a superconductor. How is a superconductor different from a normal conductor? 3. Discuss the effect of the magnetic field on the superconducting state of type I and type II superconductors. 4. What are the elements of the BCS theory? Explain the formation of Cooper pairs. 5. Explain the phenomena of the Meissner effect and zero resistivity with the help of the BCS theory. 6. Metals like gold, silver and copper do not show superconducting properties. Why?
7. Describe the V-I characteristics of a p-n junction diode. What do you understand by drift and diffusion currents in the case of a semiconductor? 8. Explain the working and characteristics of a photodiode by using the I-V curve. 9. Describe the phenomena of carrier generation and recombination in a semiconductor. 10. Define the phenomenon of photoconduction in a semiconductor. Deduce the relation between the wavelength of the photon required for intrinsic excitation and the forbidden energy gap of the semiconductor. 11. Establish the relation between the load current and load voltage of a solar cell. Describe the applications of solar cells in brief.
|
260700140c973dbf | This Quantum World/Serious illnesses/Schroedinger
From Wikibooks, open books for an open world
If the electron is a standing wave, why should it be confined to a circle? After de Broglie's crucial insight that particles are waves of some sort, it took less than three years for the mature quantum theory to be found, not once but twice: by Werner Heisenberg in 1925 and by Erwin Schrödinger in 1926. If we let the electron be a standing wave in three dimensions, we have all it takes to arrive at the Schrödinger equation, which is at the heart of the mature theory.
Let's keep to one spatial dimension. The simplest mathematical description of a wave of angular wavenumber \(k\) and angular frequency \(\omega\) (at any rate, if you are familiar with complex numbers) is the function
\[\psi(x,t) = e^{i(kx - \omega t)}.\]
Let's express the phase in terms of the electron's energy \(E = \hbar\omega\) and momentum \(p = \hbar k\):
\[\psi(x,t) = e^{i(px - Et)/\hbar}.\]
The partial derivatives with respect to \(x\) and \(t\) are
\[\frac{\partial\psi}{\partial x} = \frac{i}{\hbar}\, p\,\psi \quad\text{and}\quad \frac{\partial\psi}{\partial t} = -\frac{i}{\hbar}\, E\,\psi.\]
We also need the second partial derivative of \(\psi\) with respect to \(x\):
\[\frac{\partial^2\psi}{\partial x^2} = -\frac{p^2}{\hbar^2}\,\psi.\]
We thus have
\[E\,\psi = i\hbar \frac{\partial\psi}{\partial t}, \qquad p\,\psi = -i\hbar \frac{\partial\psi}{\partial x}, \qquad p^2\,\psi = -\hbar^2 \frac{\partial^2\psi}{\partial x^2}.\]
In non-relativistic classical physics the kinetic energy and the kinetic momentum of a free particle are related via the dispersion relation
\[E = \frac{p^2}{2m}.\]
This relation also holds in non-relativistic quantum physics. Later you will learn why.
In three spatial dimensions, \(p\) is the magnitude of a vector \(\mathbf{p}\). If the particle also has a potential energy \(V(\mathbf{r},t)\) and a potential momentum \(\mathbf{A}(\mathbf{r},t)\) (in which case it is not free), and if \(E\) and \(\mathbf{p}\) stand for the particle's total energy and total momentum, respectively, then the dispersion relation is
\[E - V = \frac{(\mathbf{p} - \mathbf{A})^2}{2m}.\]
By the square of a vector \(\mathbf{a}\) we mean the dot (or scalar) product \(\mathbf{a}\cdot\mathbf{a}\). Later you will learn why we represent possible influences on the motion of a particle by such fields as \(V(\mathbf{r},t)\) and \(\mathbf{A}(\mathbf{r},t)\).
Returning to our fictitious world with only one spatial dimension, allowing for a potential energy \(V(x,t)\), substituting the differential operators \(i\hbar\,\partial/\partial t\) and \(-i\hbar\,\partial/\partial x\) for \(E\) and \(p\) in the resulting dispersion relation, and applying both sides of the resulting operator equation to \(\psi\), we arrive at the one-dimensional (time-dependent) Schrödinger equation:
\[i\hbar \frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V\,\psi.\]
In three spatial dimensions and with both potential energy and potential momentum present, we proceed from the relation \(E - V = (\mathbf{p}-\mathbf{A})^2/2m\), substituting \(i\hbar\,\partial/\partial t\) for \(E\) and \(-i\hbar\nabla\) for \(\mathbf{p}\). The differential operator \(\nabla\) is a vector whose components are the differential operators \(\partial/\partial x,\ \partial/\partial y,\ \partial/\partial z\). The result:
\[i\hbar \frac{\partial\psi}{\partial t} = \frac{1}{2m}\left(-i\hbar\nabla - \mathbf{A}\right)^2 \psi + V\,\psi,\]
where \(\psi\) is now a function of \(\mathbf{r}\) and \(t\). This is the three-dimensional Schrödinger equation. In non-relativistic investigations (to which the Schrödinger equation is confined) the potential momentum can generally be ignored, which is why the Schrödinger equation is often given this form:
\[i\hbar \frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\,\psi.\]
The free Schrödinger equation (without even the potential energy term) is satisfied by \(\psi(x,t) = e^{i(kx-\omega t)}\) (in one dimension) or \(\psi(\mathbf{r},t) = e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\) (in three dimensions) provided that \(\hbar\omega\) equals \(\hbar^2 k^2/2m\), which is to say: \(\omega = \hbar k^2/2m\). However, since we are dealing with a homogeneous linear differential equation — which tells us that solutions may be added and/or multiplied by an arbitrary constant to yield additional solutions — any function of the form
\[\psi(x,t) = \frac{1}{\sqrt{2\pi}} \int \overline{\psi}(k)\, e^{i[kx - \omega(k)\,t]}\, dk\]
with \(\omega(k) = \hbar k^2/2m\) solves the (one-dimensional) Schrödinger equation. If no integration boundaries are specified, then we integrate over the real line, i.e., the integral is defined as the limit \(\lim_{L\to\infty}\int_{-L}^{+L}\). The converse also holds: every solution is of this form. The factor in front of the integral is present for purely cosmetic reasons, as you will realize presently. \(\overline{\psi}(k)\) is the Fourier transform of \(\psi(x,0)\), which means that
\[\overline{\psi}(k) = \frac{1}{\sqrt{2\pi}} \int \psi(x,0)\, e^{-ikx}\, dx.\]
The Fourier transform of \(\psi(x,0)\) exists because the integral \(\int |\psi(x,0)|\, dx\) is finite. In the next section we will come to know the physical reason why this integral is finite.
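A minimal numerical illustration of this Fourier representation (a sketch in natural units with an arbitrary Gaussian profile; none of the specific numbers come from the text): free evolution just multiplies each Fourier mode by the phase factor with \(\omega(k) = \hbar k^2/2m\).

```python
import numpy as np

hbar, m = 1.0, 1.0                       # natural units, chosen for illustration
x = np.linspace(-50, 50, 2048)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

# Gaussian packet with mean wavenumber 2 (arbitrary illustrative choice)
psi0 = np.exp(-x**2 / 4) * np.exp(2j * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def evolve(psi, t):
    """Multiply each Fourier mode by exp(-i*hbar*k^2*t/(2m))."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

psi_t = evolve(psi0, 5.0)
print(np.sum(np.abs(psi_t)**2) * dx)     # norm stays ~1.0: the evolution is unitary
```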
So now we have a condition that every electron "wave function" must satisfy in order to satisfy the appropriate dispersion relation. If this (and hence the Schrödinger equation) contains either or both of the potentials \(V\) and \(\mathbf{A}\), then finding solutions can be tough. As a budding quantum mechanician, you will spend a considerable amount of time learning to solve the Schrödinger equation with various potentials. |
8c70309280213362 | Nuclei Shell Energy
1. Nov 6, 2008 #1
In the nuclear shell model I understand the nomenclature for the shell sequence, but I don't know how to calculate the respective energy levels for each shell.
For example how do you calculate the energy level for
1g(7/2) or 3d(5/2)
Pointing me to an online reference will also be helpful. However, I have not learned bra-kets or Hamiltonian math yet.
2. jcsd
3. Nov 6, 2008 #2
Science Advisor
Homework Helper
what is Hamiltonian math? :-p
The energy levels are different for different nuclei. You first take an appropriate form of the central potential V_c, e.g. Woods–Saxon.
Then you add an [tex]\vec{l}\cdot \vec{s}[/tex] term to the potential (spin-orbit coupling); your potential is thus:
[tex]V(r) = V_c(r) + V_{ls}\, \vec{l}\cdot \vec{s}[/tex]
where, of course:
[tex]V_{ls} = const. \dfrac{1}{r}\dfrac{\partial V_c}{\partial r} [/tex]
Then you take your nucleus, find the excited levels and their energies, fit the parameters of V_c, and start to solve the Schrödinger equation (numerically); thus you obtain all the energies.
This is a VERY sketchy idea of how to do it; nuclear many-body physics is quite complicated...
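A minimal numerical sketch of that recipe, with typical textbook Woods–Saxon parameters (assumed, not fitted to any particular nucleus) and the spin-orbit term left out:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

A = 208                       # mass number (illustrative)
V0, a = -51.0, 0.67           # well depth (MeV) and diffuseness (fm), typical values
R = 1.27 * A ** (1 / 3)       # nuclear radius, fm
hbar2_2m = 20.736             # hbar^2/2m for a nucleon, MeV fm^2

r = np.linspace(1e-3, 20.0, 4000)
h = r[1] - r[0]

def levels(l, n_levels=3):
    """Lowest radial eigenvalues for orbital angular momentum l (Dirichlet box)."""
    V = V0 / (1 + np.exp((r - R) / a))          # Woods-Saxon central potential
    Veff = V + hbar2_2m * l * (l + 1) / r**2    # plus centrifugal barrier
    diag = 2 * hbar2_2m / h**2 + Veff
    off = -hbar2_2m / h**2 * np.ones(r.size - 1)
    E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, n_levels - 1))
    return E                                    # negative values are bound states

print(levels(0))   # 1s, 2s, 3s single-particle energies, MeV
print(levels(4))   # 1g, 2g, ... -- the l.s term would then split each of these
                   # into j = l + 1/2 and j = l - 1/2 (e.g. 1g9/2 vs 1g7/2,
                   # the kind of level the original question asks about)
```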
|
3b9c4bf59f7d3ddb | Chemistry LibreTexts
5.1: The Free Particle
We obtain the Schrödinger equation for the free particle using the following steps. First write
\[\hat {H} \psi = E \psi \label {5-1}\]
Next define the Hamiltonian,
\[\hat {H} = \hat {T} + \hat {V} \label {5-2}\]
and substitute
\[\hat {V} = 0 \label {5-3}\]
\[ \hat {T} = -\dfrac {\hbar ^2}{2m} \dfrac {d^2}{dx^2} \label {5-4} \]
to obtain
\[-\dfrac {\hbar ^2}{2m} \dfrac {d^2 \psi (x)}{dx^2} = E \psi (x) \label {5-5}\]
A major problem in Quantum Mechanics is finding solutions to differential equations, e.g. Equation \(\ref{5-5}\). Differential equations arise because the operator for kinetic energy includes a second derivative. We will solve the differential equations for some of the more basic cases, but since this is not a course in Mathematics, we will not go into all the details for other more complicated cases. The solutions that we consider in the greatest detail will illustrate the general procedures and show how the physical concept of quantization arises mathematically.
We already encountered Equation \(\ref{5-5}\) in the last chapter (Chapter 4). There, we used our knowledge of some basic functions to find the solution. Now we solve this equation by using some algebra and mathematical logic.
Rearrange Equation \(\ref{5-5}\) and make the substitution
\[k^2 = \dfrac {2mE}{\hbar ^2}.\]
This substitution is only one way of making a simplification. You could use
\[\alpha = \dfrac {2mE}{\hbar ^2}\]
but then you would find later that \((\alpha)^{1/2}\) corresponds to the wave vector k which equals \(\frac {2 \pi}{\lambda}\) and \(\frac {P}{\hbar}\). So choosing \(k^2\) here is a choice made with considerable foresight. Trial-and-error is one method scientists use to solve problems, and the results often look sophisticated and insightful after they have been found, like choosing \(k^2\) rather than \(α\).
Since \(E\) is the kinetic energy,
\[ E = \dfrac {P^2}{2m} \label {5-6}\]
and we saw in previous chapters that the momentum \(p\) and the wave vector \(k\) are related,
\[p = \hbar k \label {5-7}\]
we also could recognize that \(\dfrac {2mE}{\hbar ^2}\) is just \(k^2\) as shown here in Equation \(\ref{5-8}\).
\[ \dfrac {2mE}{\hbar ^2} = \left (\dfrac {2m}{\hbar ^2}\right ) \left ( \dfrac {p^2}{2m} \right ) = \left (\dfrac {2m}{\hbar ^2}\right ) \left ( \dfrac {\hbar ^2 k^2}{2m} \right ) = k^2 \label {5-8}\]
The result for Equation \(\ref{5-5}\) after rearranging and substitution of result from Equation \(\ref{5-8}\) is
\[\left ( \dfrac {d^2}{dx^2} + k^2 \right ) \psi (x) = 0 \label {5-9}\]
This linear second-order differential equation can be solved in the same way that an algebraic equation is solved. It is separated into two factors, and each is set equal to 0. This factorization produces two first-order differential equations that can be integrated. The details are shown in the following equations.
Exercise \(\PageIndex{1}\)
Show that the operator \((\dfrac {d^2}{dx^2} + k^2)\) equals \((\dfrac {d}{dx} + ik) (\dfrac {d}{dx} - ik)\) and that the two factors commute since k does not depend on x. The answer is Equation \(\ref{5-10}\).
\[\left(\dfrac {d}{dx} - ik\right) \left(\dfrac {d}{dx} + ik\right) \psi (x) = \left(\dfrac {d}{dx} + ik\right) \left(\dfrac {d}{dx} - ik \right) \psi (x) = 0 \label {5-10}\]
Equation \(\ref{5-10}\) will be true if either
\[ \left( \dfrac {d}{dx} + ik \right) \psi (x) = 0\]
or
\[ \left( \dfrac {d}{dx} - ik \right) \psi (x) = 0 \label {5-11}\]
Rearranging and designating the two equations and the two solutions simultaneously by a + sign and a – sign produces
\[\dfrac {d \psi _{ \pm} (x) }{\psi _{\pm} (x)} = \pm ikdx \label {5-12}\]
which leads to
\[\ln \psi _\pm (x) = \pm ikx + C _{\pm} \label {5-13}\]
and finally
\[\psi _{\pm} (x) = A_{\pm} e^{\pm ikx} \label {5-14}\]
The constants \(A_+\) and \(A_-\) result from the constant of integration. The values of these constants are determined by some physical constraint that is imposed upon the solution. Such a constraint is called a boundary condition. For the particle in a box, discussed previously, the boundary condition is that the wavefunction must be zero at the boundaries where the potential energy is infinite. The free particle does not have such a boundary condition because the particle is not constrained to any one place. Another constraint is normalization, and here the integration constants serve to satisfy the normalization requirement.
Figure \(\PageIndex{1}\): Propagation of free particle waves in 1d - real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the color opacity) of finding the particle at a given point x is spread out like a waveform, there is no definite position of the particle. Image used with permission (public domain).
Example \(\PageIndex{1}\)
Use the normalization constraint to evaluate \(A_{\pm}\). Since the integral of \(|ψ|^2\) over all values of x from -∞ to +∞ is infinite, it appears that the wavefunction ψ cannot be normalized. We can circumvent this difficulty if we imagine the particle to be in a region of space ranging from -L to +L and consider L to approach infinity. The normalization then proceeds in the usual way as shown below. Notice that the normalization constants are real even though the wavefunctions are complex.
\[ \int \limits _{-L}^{+L} \psi ^* (x) \psi (x) dx = A_{\pm} ^* A_{\pm} \int \limits _{-L}^{L} e^{\mp ikx} e^{\pm ikx} dx = 1 \nonumber \]
\[ |A_{\pm}|^2 \int \limits _{-L}^{+L} dx = |A_{\pm}|^2 2L = 1 \nonumber \]
\[A_{\pm} = [2L]^{-1/2} \nonumber \]
Exercise \(\PageIndex{2}\)
Write the wavefunctions, ψ+ and ψ− , for the free particle, explicitly including the normalization factors found in Example 5.1.
Exercise \(\PageIndex{3}\)
Find solutions to each of the following differential equations.
\[ \dfrac {d^2 y(x)}{dx^2} + 25y(x) = 0 \nonumber\]
\[\dfrac {d^2 y(x)}{dx^2} -3y(x) = 0 \nonumber \]
A neat property of linear differential equations is that sums of solutions also are solutions, or more generally, linear combinations of solutions are solutions. A linear combination is a sum with constant coefficients where the coefficients can be positive, negative, or imaginary. For example
\[Ψ(x) = C_1ψ_+(x) + C_2ψ_−(x) \label {5-15}\]
where \(C_1\) and \(C_2\) are the constant coefficients. Inserting the functions from Equation \(\ref{5-14}\), one gets
\[\Psi (x) = \dfrac {C_1}{\sqrt {2L}} e^{+ikx} + \dfrac {C_2}{\sqrt {2L}} e^{-ikx} \label {5-16}\]
By using Euler's formula,
\[e^{\pm ikx} = \cos (kx) \pm i\sin (kx) \label {5-17}\]
Equation \(\ref{5-15}\) is transformed into
\[\Psi (x) = C\cos (kx) + D\sin (kx) \label {5-18}\]
where we see that k is just the wave vector \(\dfrac{2\pi}{\lambda}\) in the trigonometric form of the solution to the Schrödinger equation. This result is consistent with our previous discussion regarding the choice of \(k^2\) to represent \(\dfrac {2mE}{ħ^2}\).
Exercise \(\PageIndex{4}\)
Find expressions for C and D in Equation \(\ref{5-18}\) for two cases: when \(C_1 = C_2\) = +1 and when \(C_1\) = +1 and \(C_2\) = -1.
Exercise \(\PageIndex{5}\)
Verify that Equations \(\ref{5-16}\) and \(\ref{5-18}\) are solutions to the Schrödinger Equation (Equation \(\ref{5-5}\)) with the eigenvalue \(E = \dfrac {\hbar ^2 k^2 }{2m}\).
Exercise \(\PageIndex{6}\)
Demonstrate that the wavefunctions you wrote for Exercise 5.2 are eigenfunctions of the momentum operator with eigenvalues \(\hbar k\) and \(-\hbar k\).
Exercise \(\PageIndex{7}\)
Determine whether Ψ(x) in Equation \(\ref{5-16}\) is an eigenfunction of the momentum operator. See Equation (3-40) for the operator.
Exercise \(\PageIndex{8}\)
The probability density for finding the free particle at any point in the segment -L to +L can be seen by plotting ψ * ψ from -L to +L. Sketch these plots for the two wavefunctions, ψ + and ψ − , that you wrote for Exercise \(\PageIndex{2}\). Demonstrate that the area between ψ * ψ and the x-axis equals 1 for any value of L. Why must this area equal 1 even as L approaches infinity? Are all points in the space equally probable or are some positions favored by the particle?
We found wavefunctions that describe the free particle, which could be an electron, an atom, or a molecule. Each wavefunction is identified by the wave vector \(k\). A wavefunction tells us three things about the free particle: the energy of the particle, the momentum of the particle, and the probability density of finding the particle at any point. You have demonstrated these properties in Exercises \(\PageIndex{5}\) , \(\PageIndex{6}\), and \(\PageIndex{8}\). These ideas are discussed further in the following paragraphs.
We first find the momentum of a particle described by \(ψ_+(x)\). We also can say that the particle is in the state \(ψ_+(x)\). The value of the momentum is found by operating on the function with the momentum operator. Remember this problem is one-dimensional so vector quantities such as the wave vector or the momentum appear as scalars. The result is shown in Example \(\PageIndex{2}\).
Example \(\PageIndex{2}\)
Extract the momentum from the wavefunction for a free electron.
First we write the momentum operator and wavefunction as shown by I and II. The momentum operator tells us the mathematical operation to perform on the function to obtain the momentum. Complete the operation shown in II to get III, which simplifies to IV.
\[ \underset{I}{-i\hbar \dfrac {d}{dx} \psi _+ (x)} = \underset{II}{-i\hbar \dfrac {d}{dx} A _+ e^{ikx}} = \underset{III}{(-i\hbar) (ik) A_+ e^{ikx}} = \underset{IV}{\hbar k \psi _+ (x)} \nonumber \]
Example \(\PageIndex{2}\) is another way to conclude that the momentum of this particle is
\[p = \hbar k.\]
Here the Compton–de Broglie momentum–wavelength relation \(p = \hbar k\) appears from the solution to the Schrödinger equation and the definition of the momentum operator! For an electron in the state \(ψ_−(x)\), we similarly find \(p = -\hbar k\). This particle is moving in the minus x direction, opposite from the particle with momentum \(+\hbar k\).
Since \(k = \dfrac {2 \pi}{\lambda}\), what then is the meaning of the wavelength for a particle, e.g. an electron? The wavelength is the wavelength of the wavefunction that describes the properties of the electron. We are not saying that an electron is a wave in the sense that an ocean wave is a wave; rather we are saying that a wavefunction is needed to describe the wave-like properties of the electron. Why the electron has these wave-like properties, remains a mystery.
We find the energy of the particle by operating on the wavefunction with the Hamiltonian operator as shown next in Equation \(\ref{5-19}\). Examine each step and be sure you see how the eigenvalue is extracted from the wavefunction.
\[\hat {H} \psi _{\pm} (x) = \dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2} A_{\pm} e^{\pm ikx}\]
\[ = \dfrac {-\hbar ^2}{2m} (\pm ik)^2 A_{\pm} e^{\pm ikx}\]
\[= \dfrac {\hbar ^2 k^2}{2m} A_{\pm}e^{\pm ikx} \label {5-19}\]
Notice again how the operator works on the wavefunction to extract a property of the system from it. We conclude that the energy of the particle is
\[ E = \dfrac { \hbar ^2 k^2}{2m} \label {5-20}\]
which is just the classical relation between the energy and momentum of a free particle, \(E = \dfrac {p^2}{2m}\). Note that an electron with momentum \(+\hbar k\) has the same energy as an electron with momentum \(-\hbar k\). When two or more states have the same energy, the states and the energy level are said to be degenerate.
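A quick symbolic check of this eigenvalue extraction (a sketch using SymPy; not part of the original text):

```python
import sympy as sp

x, k, hbar, m, A = sp.symbols('x k hbar m A', positive=True)
psi = A * sp.exp(sp.I * k * x)                     # the free-particle state psi_+

p_psi = -sp.I * hbar * sp.diff(psi, x)             # momentum operator applied
H_psi = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)  # free Hamiltonian applied

print(sp.simplify(p_psi / psi))   # hbar*k             (the momentum eigenvalue)
print(sp.simplify(H_psi / psi))   # hbar**2*k**2/(2*m) (the energy eigenvalue, 5-20)
```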
We have not found any restrictions on the momentum or the energy. These quantities are not quantized for the free particle because there are no boundary conditions. Any wave with any wavelength fits into an unbounded space. Quantization results from boundary conditions imposed on the wavefunction, as we saw for the particle-in-a-box.
Exercise \(\PageIndex{9}\)
Describe how the wavelength of a free particle varies with the energy of the particle.
Exercise \(\PageIndex{10}\)
Summarize how the energy and momentum information is contained in the wavefunction and how this information is extracted from the wavefunction.
The probability density of a free particle at a position in space \(x_0\) is
\[\psi _{\pm} ^* (x_0) \psi _{\pm} (x_0) = (2L)^{-1} e^{\mp ikx_0} e^{\pm ikx_0} = (2L)^{-1} \label {5-21}\]
From this result we see that the probability density has units of 1/m; it is the probability per meter of finding the electron at the point \(x_0\). This probability is independent of \(x_0\), the electron can be found any place along the x axis with equal probability. Although we have no knowledge of the position of the electron, we do know the electron momentum exactly. This relationship between our knowledge of position and momentum is a manifestation of the Heisenberg Uncertainty Principle, which says that as the uncertainty in one quantity is reduced, the uncertainty in another quantity increases. For this case, we know the momentum exactly and have no knowledge of the position of the particle. The uncertainty in the momentum is zero; the uncertainty in the position is infinite. |
f95c8d08d54251da | Chemistry from First Principles by Jan C. A. Boeyens
By Jan C. A. Boeyens
The book consists of two parts: a summary and critical examination of chemical theory as it developed from early beginnings through the dramatic events of the twentieth century, and a reconstruction based on a re-interpretation of the three seminal theories of periodicity, relativity and quantum mechanics in a chemical context.
Anticipating the final conclusion that matter and energy are distinct configurations of space-time, the analysis starts with the topic of relativity, the only theory that has a direct bearing on the topology of space-time and which demonstrates the equivalence of energy and matter and a reciprocal relationship between matter and the curvature of space.
Re-examination of the first quantitative model of the atom, proposed by Bohr, reveals that this theory was abandoned before it had received the attention it deserved. It provided a natural explanation of the Balmer formula that firmly established number as a fundamental parameter in science, rationalized the interaction between radiation and matter, defined the unit of electronic magnetism and produced the fine-structure constant. These are not accidental achievements, and in reworking the model it is shown, after all, to be compatible with the theory of angular momentum, on the basis of which it was first rejected with unbecoming haste.
The Sommerfeld extension of the Bohr model was based on more general quantization rules and, although more successful at the time, is shown to have introduced the red herring of tetrahedrally directed elliptic orbits, which still haunts most models of chemical bonding. The gestation period between Bohr and the formulation of quantum mechanics was dominated by the discovery and recognition of wave phenomena in theories of matter, to the extent that all formulations of quantum theory developed from the same classical-mechanical background and the Hamiltonian description of multiply-periodic systems. The reasons for the fierce debates on the interpretation of phenomena such as quantum jumps and wave models of the atom are discussed in the context of later developments. The successful, but unreasonable, suppression of the Schrödinger, Madelung and Bohm interpretations of quantum theory is shown not to have served chemistry well. The inflated claims about the uniqueness of quantum systems created a mystique that continues to frighten students of chemistry. Unreasonable models of electrons, atoms and molecules have alienated chemists from their roots, paying lip service to borrowed concepts such as measurement problems, quantum uncertainty, lack of reality, quantum logic, probability density and other ghostlike phenomena without any relevance in chemistry. In fact, classical and non-classical systems are closely linked through concepts such as wave motion, quantum potential and dynamic variables.
The second part of the book re-examines the traditional concepts of chemistry against the background of physical theories adapted for chemistry. An alternative theory is formulated from the recognition that the processes of chemistry take place in crowded environments that promote activated states of matter. Compressive activation, modelled by the methods of Hartree-Fock-Slater atomic structure simulation, leads to an understanding of elemental periodicity, the electronegativity function and covalence as a manifestation of space-time structure and the golden ratio. Molecular structure and shape are related to orbital angular momentum, and chemical change is shown to be dictated by the quantum potential. The empirical parameters used in computer simulations such as molecular mechanics and dynamics are shown to derive in a fundamental way from the relationship between covalence and the golden ratio, which also explains the physical basis of Pauli's exclusion principle for the first time.
Read Online or Download Chemistry from First Principles PDF
Best quantum theory books
Introduction to the theory of quantized fields
In this edition we have rewritten the chapters that discuss the methods of continual integration and the renormalization group, which are topics in theory that have become very important in recent years. We have also reworked and supplemented the sections on the complete Green functions.
Quantum inverse scattering method and correlation functions
The quantum inverse scattering method is a means of finding exact solutions of two-dimensional models in quantum field theory and statistical physics (such as the sine-Gordon equation or the quantum nonlinear Schrödinger equation). This introduction to this important and exciting area first deals with the Bethe ansatz and the calculation of physical quantities.
A First Course in Group Theory
One of the difficulties in an introductory book is to communicate a sense of purpose. Only too easily for the beginner does the book become a sequence of definitions, concepts, and results which seem little more than curiosities leading nowhere in particular. In this book I have tried to overcome this problem by making my principal aim the determination of all possible groups of orders 1 to 15, together with some study of their structure.
Factorization Method in Quantum Mechanics
This book introduces the factorization method in quantum mechanics at an advanced level, with the aim of putting mathematical and physical concepts and techniques like the factorization method, Lie algebras, matrix elements and quantum control at the reader's disposal. For this purpose, the text provides a comprehensive description of the factorization method and its wide applications in quantum mechanics, which complements the traditional coverage found in quantum mechanics textbooks.
Additional info for Chemistry from First Principles
Sample text
From this it follows that
\[m_e c \left( \frac{1}{p_s} - \frac{1}{p_X} \right) = 1 - \cos\theta.\]
As photon momentum \(p = E/c\), the quantum assumption \(E = h\nu\) implies that \(p = h\nu/c = h/\lambda\). This relationship between mechanical momentum and wavelength is an example of electromagnetic wave-particle duality. It reduces the Compton equation to
\[\frac{m_e c}{h}(\lambda_s - \lambda_X) = 1 - \cos\theta, \qquad \text{i.e.} \qquad \lambda_s - \lambda_X = \frac{h}{m_e c}(1 - \cos\theta).\]
This expression confirms that the experimental results can only be explained by treating X-rays as consisting of photons with energy \(E = h\nu\) and momentum \(p = h/\lambda\).
THE IMPORTANT CONCEPTS
equilibrium value. The moment of inertia of a ring of particles, \(m r_k^2\), was used as the criterion for stability to define a closed orbit that combines circular motion with simple harmonic displacements. A more general discussion that substantiates the derivation is given by Goldstein [11]. The frequency of revolution is obtained in the form of a square root, defined by a set of integers,
\[\pm\nu = \omega_0 + n^2 A + n^4 B \ldots \qquad (A < 0,\ B > 0,\ n = 1, 2, 3 \ldots)\]
interpreted by Nagaoka to show that ′′waves of equal frequency travel round the ring in opposite senses, so long as the particles are not acted upon by extraneous forces′′.
Still, it cannot be accidental that wave packets have so many properties in common with quantum-mechanical particles, and maybe the concept was abandoned prematurely. What it lacks is a mechanism to account for the appearance of mass, charge and spin, but this may not be an insurmountable problem. It is tempting to associate the rapidly oscillating component with the Compton wavelength and relativistic motion within the electronic wave packet.
5 Matter Waves
It is of more than passing interest to note that de Broglie's relationship always leads to the Sommerfeld quantization rules.
|
667ac630647b1e1c | The weird force
In my previous post (Loose Ends), I mentioned the weak force as the weird force. Indeed, unlike photons or gluons (i.e. the presumed carriers of the electromagnetic and strong force respectively), the weak force carriers (W bosons) have (1) mass and (2) electric charge:
1. W bosons are very massive. The equivalent mass of a W+ and W– boson is some 86.3 atomic mass units (amu): that's about the same as a rubidium or strontium atom. The mass of a Z boson is even larger: roughly equivalent to the mass of a molybdenum atom (98 amu). That is extremely heavy: just compare with iron or silver, which have a mass of about 56 amu and 108 amu respectively. Because they are so massive, W bosons cannot travel very far before disintegrating (they actually go (almost) nowhere), which explains why the weak force is very short-range only, and so that's yet another fundamental difference as compared to the other fundamental forces (a quick estimate of just how short-range follows right after this list).
2. The electric charge of W and Z bosons explains why we have a trio of weak force carriers rather than just one: W+, W– and Z0. Feynman calls them “the three W’s”.
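Here is the back-of-the-envelope estimate announced above (a sketch; the range of a force mediated by a massive carrier is conventionally estimated by its reduced Compton wavelength ħ/mc):

```python
hbar_c = 197.327        # MeV fm (standard value)
mW_c2 = 80_379.0        # W rest energy in MeV, i.e. ~80.4 GeV (~86.3 amu)

print(hbar_c / mW_c2)   # ~2.5e-3 fm: a few hundredths of a percent of a
                        # proton radius, so the W indeed goes (almost) nowhere
```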
The electric charge of W and Z bosons is what it is: an electric charge – just like protons and electrons. Hence, one has to distinguish it from the weak charge as such: the weak charge (or, to be correct, I should say the weak isospin number) of a particle (such as a proton or a neutron for example) is related to the propensity of that particle to interact through the weak force — just like the electric charge is related to the propensity of a particle to interact through the electromagnetic force (think about Coulomb's law for example: likes repel and opposites attract), and just like the so-called color charge (or the (strong) isospin number I should say) is related to the propensity of quarks (and gluons) to interact with each other through the strong force.
In short, as compared to the electromagnetic force and the strong force, the weak force (or Fermi’s interaction as it’s often called) is indeed the odd one out: these W bosons seem to mix just about everything: mass, charge and whatever else. In his 1985 Lectures on Quantum Electrodynamics, Feynman writes the following about this:
“The observed coupling constant for W’s is much the same as that for the photon. Therefore, the possibility exists that the three W’s and the photon are all different aspects of the same thing. Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called ‘the weak interactions’ into one quantum theory, and they did it. But if you look at the results they get, you can see the glue—so to speak. It’s very clear that the photon and the three W’s are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly—you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.” (Feynman, 1985, p. 142)
Well… That says it all, I think. And from what I can see, the (tentative) confirmation of the existence of the Higgs field has not made these ‘seams’ any less visible. However, before criticizing eminent scientists such as Weinberg and Salam, we should obviously first have a closer look at those W bosons without any prejudice.
Alpha decay, potential wells and quantum tunneling
The weak force is usually explained as the force behind a process referred to as beta decay. However, because beta decay is just one form of radioactive decay, I need to say something about alpha decay too. [There is also gamma decay but that’s like a by-product of alpha and beta decay: when a nucleus emits an α or β particle (i.e. when we have alpha or beta decay), the nucleus will usually be left in an excited state, and so it can then move to a lower energy state by emitting a gamma ray photon (gamma radiation is very hard (i.e. very high-energy) radiation) – in the same way that an atomic electron can jump to a lower energy state by emitting a (soft) light ray photon. But so I won’t talk about gamma decay.]
Atomic decay, in general, is a loss of energy accompanying a transformation of the nucleus of the atom. Alpha decay occurs when the nucleus ejects an alpha particle: an α-particle consist of two protons and two neutrons bound together and, hence, it’s identical to a helium nucleus. Alpha particles are commonly emitted by all of the larger radioactive nuclei, such as uranium (which becomes thorium as a result of the decay process), or radium (which becomes radon gas). However, alpha decay is explained by a mechanism not involving the weak force: the electromagnetic force and the nuclear force (i.e. the strong force) will do. The reasoning is as follows: the alpha particle can be looked at as a stable but somewhat separate particle inside the nucleus. Because of their charge (both positive), the alpha particle inside of the nucleus and ‘the rest of the nucleus’ are subject to strong repulsive electromagnetic forces between them. However, these strong repulsive electromagnetic forces are not as strong as the strong force between the quarks that make up matter and, hence, that’s what keeps them together – most of the time that is.
Let me be fully complete here. The so-called nuclear force between composite particles such as protons and neutrons – or between clusters of protons and neutrons in this case – is actually the residual effect of the strong force. The strong force itself is between quarks – and between them only – and so that’s what binds them together in protons and neutrons (so that’s the next level of aggregation you might say). Now, the strong force is mostly neutralized within those protons and neutrons, but there is some residual force, and so that’s what keeps a nucleus together and what is referred to as the nuclear force.
There is a very helpful analogy here: the electromagnetic forces between neutral atoms (and/or molecules)—referred to as van der Waals forces (that’s what explains the liquid shape of water, among other things)— are also the residual of the (much stronger) electromagnetic forces that tie the electrons to the nucleus.
Now, that residual strong force – i.e. the nuclear force – diminishes in strength with distance but, within a certain distance, that residual force is strong enough to do what it does, and that’s to keep the nucleus together. This stable situation is usually depicted by what is referred to as a potential well:
Potential well
The name is obvious: a well is a hole in the ground from which you can get water (or oil or gas or whatever). Now, the sea level might actually be lower than the bottom of a well, but the water would still stay in the well. In the illustration above, we are not depicting water levels but energy levels, but it's equally obvious it would require some energy to kick a particle out of this well: if it were water, we'd require a pump to get it out but, of course, it would be happy to flow to the sea once it's out. Indeed, once a charged particle is out (I am talking about our alpha particle now), it will obviously stay out because of the repulsive electromagnetic forces coming into play (positive charges reject each other).
But so how can it escape the nuclear force and go up on the side of the well? [A potential pond or lake would have been a better term – but then that doesn’t sound quite right, does it? :-)]
Well, the energy may come from outside – that's what's referred to as induced radioactive decay (just Google it and you will find tons of articles on experiments involving laser-induced accelerated alpha decay) – or, and that's much more intriguing, the Uncertainty Principle comes into play.
Huh? Yes. According to the Uncertainty Principle, the energy of our alpha particle inside of the nucleus wiggles around some mean value, but our alpha particle also has an amplitude to be at some higher energy level. That results not only in a theoretical probability for it to escape from the well but also in something actually happening if we wait long enough: the amplitude (and, hence, the probability) is tiny, but it's what explains the decay process – and what gives U-232 a half-life of 68.9 years, and also what gives the more common U-238 a much more comfortable 4.47 billion years as its half-life period.
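Those half-lives translate into decay probabilities through the usual exponential law N(t) = N₀e^(−λt) with λ = ln 2/T½ (a sketch; the only inputs are the half-lives quoted above):

```python
import math

for name, t_half in [("U-232", 68.9), ("U-238", 4.47e9)]:   # years
    lam = math.log(2) / t_half                # decay constant, per year
    left = math.exp(-lam * 100)               # fraction surviving a century
    print(name, lam, left)
# U-232: only ~37% of a sample survives a century; U-238: essentially all of it.
```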
Now that we’re talking about wells and all that, we should also mention that this phenomenon of getting out of the well is referred to as quantum tunneling. You can easily see why: it’s like the particle dug its way out. However, it didn’t: instead of digging under the sidewall, it sort of ‘climbed over’ it. Think of it being stuck and trying and trying and trying – a zillion times – to escape, until it finally did. So now you understand this fancy word: quantum tunneling. However, this post is about the weak force and so let’s discuss beta decay now.
Beta decay and intermediate vector bosons
Beta decay also involves transmutation of nuclei, but not by the emission of an α-particle but by a β-particle. A beta particle is just a different name for an electron (β–) or its anti-matter counterpart: the positron (β+). [Physicists usually simplify stuff but, in this case, they obviously didn't: why don't they just write e– and e+ here?]
An example of β decay is the decay of carbon-14 (C-14) into nitrogen-14 (N-14), and an example of β+ decay is the decay of magnesium-23 into sodium-23. C-14 and N-14 have the same mass but they are different atoms. The decay process is described by the equations below:
Beta decay
You’ll remember these formulas from your high-school days: beta decay does not change the mass number (carbon and nitrogen have the same mass: 14 units) but it does change the atomic (or proton) number: nitrogen has an extra proton. So one of the neutrons became a proton ! [The second equation shows the opposite: a proton became a neutron.] In order to do that, the carbon atom had to eject a negative charge: that’s the electron you see in the equation above.
In addition, there is also the ejection of an anti-neutrino (that's what the bar above the νe symbol stands for: antimatter). You'll wonder what an antineutrino could possibly be. Don't worry about it: it's not any spookier than the neutrino. Neutrinos and anti-neutrinos have no electric charge and so you cannot distinguish them on that account (electric charge). However, all antineutrinos have right-handed helicity (i.e. they come in only one of the two possible spin states), while the neutrinos are all left-handed. That's why beta decay is said not to respect parity symmetry, aka mirror symmetry. Hence, in the case of beta decay, Nature does distinguish between the world and the mirror world ! I'll come back on that but let me first lighten up the discussion somewhat with a graphical illustration of that neutron-proton transformation.
As for the magnesium-sodium transformation, we’d have something similar, but with a positron instead of an electron (a positron is just an electron with a positive charge for all practical purposes) and a regular neutrino instead of an anti-neutrino. So we’d just have the anti-matter counterparts of the particles emitted in β– decay. [Don’t be put off by the term ‘anti-matter’: anti-matter is really just like regular matter – except that the charges have opposite sign. For example, the anti-matter counterpart of a blue quark is an anti-blue quark, and the anti-matter counterpart of a neutrino has right-handed helicity – or spin – as opposed to the ‘left-handed’ ‘ordinary’ neutrinos.]
Now, you surely will have several serious questions. The most obvious question is: what happens with the electron and the neutrino? Well… Those spooky neutrinos are gone before you know it and so don’t worry about them. As for the electron, the carbon had only six electrons but the nitrogen needs seven to be electrically neutral… So you might think the new atom will take care of it. Well… No. Sorry. Because of its kinetic energy, the ejected electron is likely to just explore the world and crash into something else, and so we’re left with a positively charged nitrogen ion indeed. So I should have added a little + sign next to the N in the formula above. Of course, one cannot exclude the possibility that this ion will pick up an electron later – but don’t bet on it: it may not find any free electrons around !
As for the positron (in a β+ decay), that will just grab the nearest electron around and auto-destruct—thereby generating two high-energy photons (so that’s a little light flash). The net result is that we do not have an ion but a neutral sodium atom. That annihilation should not be confused with electron capture, which is a separate process in which the nucleus itself absorbs one of the electrons on a shell around it (the K or L shell for example); the ‘transformation equation’ for electron capture can be written as p + e– → n + νe (with p and n denoting a proton and a neutron respectively).
The more important question is: where are the W and Z bosons in this story?
Ah ! Yes! Sorry I forgot about them. The Feynman diagram below shows how it really works—and why the name of intermediate vector bosons for these three strange ‘particles’ (W+, W–, and Z0) is so apt. These W bosons are just a short trace of ‘something’ indeed: their half-life is about 3×10⁻²⁵ s, and so that’s the same order of magnitude (or minitude I should say) as the mean lifetime of other resonances observed in particle collisions.
Feynman diagram beta decay
Indeed, you’ll notice that, in this so-called Feynman diagram, there’s no space axis. That’s because the distances involved are so tiny that we have to distort the scale—so we are not using equivalent time and distance units here, as Feynman diagrams should. That’s in line with a more prosaic description of what may be happening: W bosons mediate the weak force by seemingly absorbing an awful lot of momentum, spin, and whatever other energy related to all of the qubits describing the particles involved, to then eject an electron (or positron) and a neutrino (or an anti-neutrino).
Hmm… That’s not a standard description of a W boson as a force carrying particle, you’ll say. You’re right. This is more the description of a Z boson. What’s the Z boson again? Well… I haven’t explained it yet. It’s not involved in beta decay. There’s a process called elastic scattering of neutrinos. Elastic scattering means that some momentum is exchanged but neither the target (an electron or a nucleus) nor the incident particle (the neutrino) are affected as such (so there’s no break-up of the nucleus for example). In other words, things bounce back and/or get deflected but there’s no destruction and/or creation of particles, which is what you would have with inelastic collisions. Let’s examine what happens here.
W and Z bosons in neutrino scattering experiments
It’s easy to generate neutrino beams: remember their existence was confirmed in 1956 because nuclear reactors create a huge flux of them ! So it’s easy to send lots of high-energy neutrinos into a cloud or bubble chamber and see what happens. Cloud and bubble chambers are prehistoric devices which were built and used to detect electrically charged particles moving through them. I won’t go into too much detail but I can’t resist inserting a few historic pictures here.
The first two pictures below document the first experimental confirmation of the existence of positrons by Carl Anderson, back in 1932 (and, no, he’s not Danish but American), for which he got a Nobel Prize. The magnetic field which gives the positron some curvature—the trace of which can be seen in the image on the right—is generated by the coils around the chamber. Note the opening in the coils, which allows for taking a picture when the supersaturated vapor is suddenly being decompressed – and so the charged particle that goes through it leaves a trace of ionized atoms behind that act as ‘nucleation centers’ around which the vapor condenses, thereby forming tiny droplets. Quite incredible, isn’t it? One can only admire the perseverance of these early pioneers.
Carl Anderson Positron
The picture below is another historical first: it’s the first detection of a neutrino in a bubble chamber. It’s fun to analyze what happens here: we have a mu-meson – aka a muon – coming out of the collision here (that’s just a heavier version of the electron) and then a pion – which should (also) be electrically charged because the muon carries electric charge… But I will let you figure this one out. I need to move on with the main story. 🙂
The point to note is that these spooky neutrinos collide with other matter particles. In the image above, it’s a proton, but so when you’re shooting neutrino beams through a bubble chamber, a few of these neutrinos can also knock electrons out of orbit, and so that electron will seemingly appear out of nowhere in the image and move some distance with some kinetic energy (which can all be measured because magnetic fields around it will give the electron some curvature indeed, and so we can calculate its momentum and all that).
Of course, they will tend to move in the same direction – more or less at least – as the neutrinos that knocked them loose. So it’s like the Compton scattering which we discussed earlier (from which we could calculate the so-called classical radius of the electron – or its size if you will)—but with one key difference: the electrons get knocked loose not by photons, but by neutrinos.
But… How can they do that? Photons carry the electromagnetic field so the interaction between them and the electrons is electromagnetic too. But neutrinos? Last time I checked, they were matter particles, not bosons. And they carry no charge. So what makes them scatter electrons?
You’ll say that’s a stupid question: it’s the neutrino, dummy ! Yes, but how? Well, you’ll say, they collide—don’t they? Yes. But we are not talking tiny billiard balls here: if particles scatter, one of the fundamental forces of Nature must be involved, and usually it’s the electromagnetic force: it’s the electron density around nuclei indeed that explains why atoms will push each other away if they meet each other and, as explained above, it’s also the electromagnetic force that explains Compton scattering. So billiard balls bounce back because of the electromagnetic force too and…
OK-OK-OK. I got it ! So here it must be the strong force or something. Well… No. Neutrinos are not made of quarks. You’ll immediately ask what they are made of – but the answer is simple: they are what they are – one of the four matter particles in the Standard Model – and so they are not made of anything else. Capito?
OK-OK-OK. I got it ! It must be gravity, no? Perhaps these neutrinos don’t really hit the electron: perhaps they skim near it and sort of drag it along as they pass? No. It’s not gravity either. It can’t be. We have no exact measurement of the mass of a neutrino but it’s damn close to zero – and, hence, way too small to exert any such influence on an electron. It’s just not consistent with those traces.
OK-OK-OK. I got it ! It’s that weak force, isn’t it? YES ! The Feynman diagrams below show the mechanism involved. As far as terminology goes (remember Feynman’s complaints about the up, down, strange, charm, beauty and truth quarks?), I think this is even worse. The interaction is described as a current, and when the neutral Z boson is involved, it’s called a neutral current – as opposed to… Well… Charged currents. Neutral and charged currents? That sounds like sweet and sour candy, doesn’t it? But isn’t candy supposed to be sweet? Well… No. Sour candy is pretty common too. And so neutral currents are pretty common too.
You obviously don’t believe a word of what I am saying and you’ll wonder what the difference is between these charged and neutral currents. The end result is the same in the first two pictures: an electron and a neutrino interact, and they exchange momentum. So why is one current neutral and the other charged? In fact, when you ask that question, you are actually wondering whether we need that neutral Z boson. W bosons should be enough, no?
No. The first and second picture are “the same but different”—and you know what that means in physics: it means it’s not the same. It’s different. Full stop. In the second picture, there is electron absorption (only for a very brief moment obviously, but so that’s what it is, and you don’t have that in the first diagram) and then electron emission, and there’s also neutrino absorption and emission. […] I can sense your skepticism – and I actually share it – but that’s what I understand of it !
[…] So what’s the third picture? Well… That’s actually beta decay: a neutron becomes a proton, and there’s emission of an electron and… Hey ! Wait a minute ! This is interesting: this is not what we wrote above: we have an incoming neutrino instead of an outgoing anti-neutrino here. So what’s this?
Well… I got this illustration from a blog on physics (Galileo’s Pendulum – The Flavor of Neutrinos) which, in turn, mentions Physics Today as its source. The incoming neutrino has nothing to do with the usual representation of an anti-matter particle as a particle traveling backwards in time. It’s something different, and it triggers a very interesting question: could beta decay possibly be ‘triggered’ by neutrinos? Who knows?
I googled it, and there seems to be some evidence supporting such a thesis. However, this ‘evidence’ is flimsy (the only real ‘clue’ is that the activity of the Sun, as measured by the intensity of solar flares, seems to have some (tiny) impact on the rate of decay of radioactive elements on Earth) and, hence, most ‘serious’ scientists seem to reject that possibility. I wonder why: it would make the ‘weird force’ somewhat less weird in my view. So… What to say? Well… Nothing much at this moment. Let me move on and examine the question a bit more in detail in a Post Scriptum.
The odd one out
You may wonder if neutrino-electron interactions always involve the weak force. The answer to that question is simple: Yes ! Because they do not carry any electric charge, and because they are not quarks, neutrinos are only affected by the weak force. However, as evidenced by all the stuff I wrote on beta decay, you cannot turn this statement on its head: the weak force is relevant not only for neutrinos but for electrons and quarks as well ! That gives us the following connection between forces and matter:
forces and matter
[Specialists reading this post may say they’ve not seen this diagram before. That might be true. I made it myself – for a change – but I am sure it’s around somewhere.]
It is a weird asymmetry: almost massless particles (neutrinos) interact with other particles through massive bosons, and these massive ‘things’ are supposed to be ‘bosons’, i.e. force carrying particles ! These physicists must be joking, right? These bosons can hardly carry themselves – as evidenced by the fact they peter out just like all of those other ‘resonances’ !
Hmm… Not sure what to say. It’s true that their honorific title – ‘intermediate vectors’ – seems to be quite apt: they are very intermediate indeed: they only appear as a short-lived stage in between the initial and final state of the system. Again, it leads one to think that these W bosons may just reflect some kind of energy blob caused by some neutrino – or anti-neutrino – crashing into another matter particle (a quark or an electron). Whatever it is, this weak force is surely the odd one out.
Odd one out
In my previous post, I mentioned other asymmetries as well. Let’s revisit them.
Time irreversibility
In Nature, uranium is usually found as uranium-238. Indeed, that’s the most abundant isotope of uranium: about 99.3% of all uranium is U-238. There’s also some uranium-235 out there: some 0.7%. And there are also trace amounts of U-234. And that’s it really. So where is the U-232 we introduced above when talking about alpha decay? Well… We said it has a half-life of 68.9 years only and so it’s rather normal U-232 cannot be found in Nature. What? Yes: 68.9 years is nothing compared to the half-life of U-238 (4.47 billion years) or U-235 (704 million years), and so it’s all gone. In fact, the tiny proportion of U-235 on this Earth is what allows us to date the Earth. The math and physics involved resemble the math and physics involved in carbon-dating, but carbon-dating is used for organic materials only, because the carbon-14 that’s used also has a fairly short half-life: 5,730 years—so that’s almost a hundred times longer than U-232’s but… Well… Not like millions or billions of years. [You’ll immediately ask why this C-14 is still around if it’s got such a short half-life. The answer to that is easy: C-14 is continually being produced in the atmosphere and, hence, unlike U-232, it doesn’t just disappear.]
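The math behind such dating is just the decay law solved for time: if you know what fraction is left, the age follows. A quick sketch (same disclaimer: my own back-of-the-envelope Python, nothing more):

    import math

    def age_from_fraction(fraction_left, half_life_years):
        """Invert N/N0 = 2^(-t/T): t = T * log2(N0/N)."""
        return half_life_years * math.log2(1.0 / fraction_left)

    # An organic sample with 25% of its original C-14 left is exactly two half-lives old:
    print(age_from_fraction(0.25, 5730))  # 11460.0 years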
Hmm… Interesting. Radioactive decay suggests time irreversibility. Indeed, it’s wonderful and amazing – but sad at the same time:
1. There’s so much diversity – a truly incredible range of chemical elements making life what it is.
2. But so all these chemical elements have been produced through a process of nuclear fusion in stars (stellar nucleosynthesis), which were then blasted into space by supernovae, and so they then coagulated into planets like ours.
3. However, all of the heavier atoms will decay back into some lighter element because of radioactive decay, as shown in the graph below.
4. So we are doomed !
Overview of decay modes
In fact, some of the GUT theorists think that there is no such thing as ‘stable nuclides’ (that’s the black line in the graph above): they claim that all atomic species will decay because – according to their line of reasoning – the proton itself is NOT stable.
WHAT? Yeah ! That’s what Feynman complained about too: he obviously doesn’t like these GUT theorists either. Of course, there is an expensive experiment trying to prove spontaneous proton decay: the so-called Super-K under Mount Kamioka in Japan. It’s basically a huge tank of ultra-pure water with a lot of machinery around it… Just google it. It’s fascinating. If, one day, it would be able to prove that there’s proton decay, our Standard Model would be in very serious trouble – because it doesn’t cater for unstable protons. That being said, I am happy that has not happened so far – because it would mean our world would really be doomed.
What do I mean with that? We’re all doomed, aren’t we? If only because of the Second Law of Thermodynamics. Huh? Yes. That ‘law’ just expresses a universal principle: all kinetic and potential energy observable in nature will, in the end, dissipate: differences in temperature, pressure, and chemical potential will even out. Entropy increases. Time is NOT reversible: it points in the direction of increasing entropy – till all is the same once again. Sorry?
Don’t worry about it. When everything is said and done, we humans – or life in general – are an amazing negation of the Second Law of Thermodynamics: temperature, pressure, chemical potential and what have you – it’s all super-organized and super-focused in our body ! But it’s temporary indeed – and we actually don’t negate the Second Law of Thermodynamics: we create order by creating disorder. In any case, I don’t want to dwell on this point. Time reversibility in physics usually refers to something else: time reversibility would mean that all basic laws of physics (and with ‘basic’, I am excluding this higher-level Second Law of Thermodynamics) would be time-reversible: if we’d put in minus t (–t) instead of t, all formulas would still make sense, wouldn’t they? So we could – theoretically – reverse our clock and stopwatches and go back in time.
Can we do that?
Well… We can reverse a lot. For example, U-232 decays into a lot of other stuff BUT we can produce U-232 from scratch once again—from thorium to be precise. In fact, that’s how we got it in the first place: as mentioned above, any natural U-232 that might have been produced in those stellar nuclear fusion reactors is gone. But so that means that alpha decay is reversible: we’re producing stable stuff – U-232 lasts for dozens of years – that probably existed a long time ago but decayed, and so now we’re reversing the arrow of time using our nuclear science and technology.
Now, you may object that you don’t see Nature spontaneously assemble the nuclear technology we’re using to produce U-232, except if Nature would go for that Big Crunch everyone’s predicting so it can repeat the Big Bang once again (so that’s the oscillating Universe scenario)—and you’re obviously right in that assessment. That being said, from some kind of weird existential-philosophical point of view, it’s kind of nice to know that – in theory at least – there is time reversibility indeed (or T symmetry as it’s called by scientists).
What? That’s right. For beta decay, we don’t have T symmetry. The weak force breaks all kinds of symmetries, and time symmetry is only one of them. I talked about these in my previous post (Loose Ends) – so please have a look at that, and let me just repeat the basics:
1. Parity (P) symmetry or mirror symmetry revolves around the notion that Nature should not distinguish between right- and left-handedness, so everything that works in our world, should also work in the mirror world. Now, the weak force does not respect P symmetry: β– decay needs right-handed antineutrinos, and in the ‘mirror world’, our right-handed antineutrinos would be left-handed ones – which the weak force doesn’t work with. Full stop. Our world is different from the mirror world because the weak force knows the difference between left and right – and some stuff only works with left-handed stuff (and then some other stuff only works with right-handed stuff). In short, the weak force doesn’t work the same in the mirror world: in the mirror world, we’d need to throw in left-handed antineutrinos for β– decay. Not impossible but a bit of a nuisance, you’ll agree.
2. Charge conjugation or charge (C) symmetry revolves around the notion that a world in which we reverse all (electric) charge signs should work just the same. Now, the weak force also does not respect C symmetry. I’ll let you go through the reasoning for that, but it’s the same really. Just reversing all signs would not make the weak force ‘work’ in the mirror world: we’d have to ‘keep’ some of the signs – notably those of our W bosons !
3. Initially, it was thought that the weak force respected the combined CP symmetry (and, therefore, that the principle of P and C symmetry could be substituted by a combined CP symmetry principle) but two experimenters – Val Fitch and James Cronin – got a Nobel Prize when they proved that this was not the case. To be precise, the spontaneous decay of neutral kaons (which is a type of decay mediated by the weak force) does not respect CP symmetry. Now, that was the death blow to time reversibility (T symmetry). Why? Can’t we just make a film of those experiments not respecting P, C or CP symmetry, and then just press the ‘reverse’ button? We could but one can show that the relativistic invariance in Einstein’s relativity theory implies a combined CPT symmetry. Hence, if CP is a broken symmetry, then the T symmetry is also broken. So we could play that film, but the laws of physics would not make sense ! In other words, the weak force does not respect T symmetry either !
To summarize this rather lengthy philosophical digression: a full CPT sequence of operations would work. So we could – in sequence – (1) change all particles to antiparticles (C), (2) reflect the system in a mirror (P), and (3) change the sign of time (T), and we’d have a ‘working’ anti-world that would be just as real as ours. HOWEVER, we do not live in a mirror world. We live in OUR world – and so left-handed is left-handed, and right-handed is right-handed, and positive is positive and negative is negative, and so THERE IS NO TIME REVERSIBILITY: the weak force does not respect T symmetry.
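If you want to see how simple the bookkeeping of that CPT sequence is – conceptually at least – here’s a toy Python sketch of my own (not any standard formalism !): a ‘state’ is just a charge sign, a handedness and a time direction, and C, P and T each flip one of them.

    def C(state):  # charge conjugation: particles <-> antiparticles
        q, hand, t = state
        return (-q, hand, t)

    def P(state):  # parity: reflect the system in a mirror
        q, hand, t = state
        return (q, 'L' if hand == 'R' else 'R', t)

    def T(state):  # time reversal: run the film backwards
        q, hand, t = state
        return (q, hand, -t)

    electron = (-1, 'L', +1)  # a left-handed electron moving forward in time
    print(T(P(C(electron))))  # (1, 'R', -1): its image in the 'working' anti-world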
Do you understand now why I call the weak force the weird force? Penrose devotes a whole chapter to time reversibility in his Road to Reality, but he does not focus on the weak force. I wonder why. All that rambling on the Second Law of Thermodynamics is great – but one should relate that ‘principle’ to the fundamental forces and, most notably, to the weak force.
Post scriptum 1:
In one of my previous posts, I complained about not finding any good image of the Higgs particle. The problem is that these super-duper particle accelerators don’t use bubble chambers anymore. The scales involved have become incredibly small and so all that we have is electronic data, it seems, and that is then re-assembled into some kind of digital image but – when everything is said and done – these images are only simulations. Not the real thing. I guess I am just an old grumpy guy – a 45-year-old economist: what do you expect? – but I’ll admit that those black-and-white pictures above make my heart race a bit more than those colorful simulations. But so I found a good simulation. It’s the cover image of Wikipedia’s Physics beyond the Standard Model article (I should have looked there in the first place, I guess). So here it is: the “simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson (produced by colliding protons) decaying into hadron jets and electrons.”
CMS_Higgs-event (1)
So that’s what gives mass to our massive W bosons. The Higgs particle is a massive particle itself: an estimated 125-126 GeV/c², so that’s about 1.5 times the mass of the W bosons. I tried to look into decay widths and all that, but it’s all quite confusing. In short, I have no doubt that the Higgs theory is correct – the data is all we have and then, when everything is said and done, we have an honorable Nobel Prize Committee thinking the evidence is good enough (which – in light of their rather conservative approach (which I fully subscribe to: don’t get me wrong !) – usually means that it’s more than good enough !) – but I can’t help thinking this is a theory which has been designed to match experiment.
Wikipedia writes the following about the Higgs field:
“The Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W– and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realized as) the massive Higgs boson.”
Hmm… So we assign some qubits to W bosons (sorry for the jargon: I am talking about these ‘longitudinal third-polarization components’ here), and to W bosons only, and then we find that the Higgs field gives mass to these bosons only? I might be mistaken – I truly hope so (I’ll find out when I am somewhat stronger in quantum-mechanical math) – but, as for now, it all smells somewhat fishy to me. It’s all consistent, yes – and I am even more skeptical about GUT stuff ! – but it does look somewhat artificial.
But then I guess this rather negative appreciation of the mathematical beauty (or lack of it) of the Standard Model is really what is driving all these GUT theories – and so I shouldn’t be so skeptical about them ! 🙂
Oh… And as I’ve inserted some images of collisions already, let me insert some more. The ones below document the discovery of quarks. They come out of the above-mentioned coffee table book of Lederman and Schramm (1989). The accompanying texts speak for themselves.
Quark - 1
Quark - 2
Quark - 3
Post scriptum 2:
I checked the source of that third diagram showing how an incoming neutrino could possibly cause a neutron to become a proton. It comes out of the August 2001 issue of Physics Today indeed, and it describes a very particular type of beta decay. This is the original illustration:
inverse beta decay
The article (and the illustration above) describes how solar neutrinos traveling through heavy water – i.e. water in which the hydrogen is the deuterium isotope – can interact with the deuterium nucleus – which is referred to as the deuteron, and which we’ll represent by the symbol d in the process descriptions below. The nucleus of deuterium – which is an isotope of hydrogen – consists of one proton and one neutron, as opposed to the much more common protium isotope of hydrogen, which has just one proton in the nucleus. Deuterium occurs naturally (0.0156% of all hydrogen atoms in the Earth’s oceans are deuterium), but it can also be produced industrially – for use in heavy-water nuclear reactors for example. In any case, the point is that a deuteron can respond to solar neutrinos by breaking up in one of two ways:
1. Quasi-elastically: νe + d → νe + p + n. So, in this case, the deuteron just breaks up into its two components: one proton and one neutron. That seems to happen pretty frequently, because the nuclear force holding the proton and the neutron together is pretty weak it seems.
2. Alternatively, the solar neutrino can turn the deuteron’s neutron into a second proton, and so that’s what’s depicted in the third diagram above: νe + d → e– + p + p. So what happens really is νe + n → e– + p.
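It’s the same kind of bookkeeping as for the beta-decay equations above, by the way, but now with lepton number added: electrons and neutrinos count as +1, their anti-particles as –1, and everything else as 0. A small Python sketch of my own checking both channels:

    # Each participant is an (electric charge Q, lepton number L) pair.
    nu_e = (0, +1)           # the solar (electron) neutrino
    e_minus = (-1, +1)       # the electron
    p, n = (+1, 0), (0, 0)   # proton and neutron
    d = (+1, 0)              # the deuteron: one proton plus one neutron

    def conserved(lhs, rhs):
        """Total charge and total lepton number must match on both sides."""
        return tuple(map(sum, zip(*lhs))) == tuple(map(sum, zip(*rhs)))

    print(conserved([nu_e, d], [nu_e, p, n]))     # True: the quasi-elastic break-up
    print(conserved([nu_e, d], [e_minus, p, p]))  # True: the 'inverse beta decay'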
The author of this article – which basically presents the basics of how a new neutrino detector – the Sudbury Neutrino Observatory – is supposed to work – refers to the second process as inverse beta decay – but that’s a rather generic and imprecise term it seems. The conclusion is that the weak force seems to have myriad ways of expressing itself. However, the connection between neutrinos and the weak force seems to need further exploring. As for myself, I’d like to know why the hypothesis that any form of beta decay – or, for that matter, any other expression of the weak force – is actually being triggered by these tiny neutrinos crashing into (other) matter particles would not be reasonable.
In such a scenario, the W bosons would be reduced to a (very) temporary messy ‘blob’ of energy, combining kinetic and electromagnetic energy, as well as the strong binding energy between quarks if protons and neutrons are involved. Could this ‘odd one out’ be nothing but a pseudo-force? I am no doubt being very simplistic here – but then it’s an interesting possibility, isn’t it? In order to firmly deny it, I’ll need to learn a lot more about neutrinos no doubt – and about how the results of all these collisions in particle accelerators are actually being analyzed and interpreted.
Loose ends…
It looks like I am getting ready for my next plunge into Roger Penrose’s Road to Reality. I still need to learn more about those Hamiltonian operators and all that, but I can sort of ‘see’ what they are supposed to do now. However, before I venture off on another series of posts on math instead of physics, I thought I’d briefly present what Feynman identified as ‘loose ends’ in his 1985 Lectures on Quantum Electrodynamics – a few years before his untimely death – and then see if any of those ‘loose ends’ appears less loose today, i.e. some thirty years later.
The three-forces model and coupling constants
All three forces in the Standard Model (the electromagnetic force, the weak force and the strong force) are mediated by force carrying particles: bosons. [Let me talk about the Higgs field later and – of course – I leave out the gravitational force, for which we do not have a quantum field theory.]
Indeed, the electromagnetic force is mediated by the photon; the strong force is mediated by gluons; and the weak force is mediated by W and/or Z bosons. The mechanism is more or less the same for all. There is a so-called coupling (or a junction) between a matter particle (i.e. a fermion) and a force-carrying particle (i.e. the boson), and the amplitude for this coupling to happen is given by a number that is related to a so-called coupling constant.
Let’s give an example straight away – and let’s do it for the electromagnetic force, which is the only force we have been talking about so far. The illustration below shows three possible ways for two electrons moving in spacetime to exchange a photon. This involves two couplings: one emission, and one absorption. The amplitude for an emission or an absorption is the same: it’s –j. So the amplitude here will be (–j)(–j) = j². Note that the two electrons repel each other as they exchange a photon, which reflects the electromagnetic force between them from a quantum-mechanical point of view !
Photon exchange

We will have a number like this for all three forces. Feynman writes the coupling constant for the electromagnetic force as j and the coupling constant for the strong force (i.e. the amplitude for a gluon to be emitted or absorbed by a quark) as g. [As for the weak force, he is rather short on that and actually doesn’t bother to introduce a symbol for it. I’ll come back on that later.]
The coupling constant is a dimensionless number and one can interpret it as the unit of ‘charge’ for the electromagnetic and strong force respectively. So the ‘charge’ q of a particle should be read as q times the coupling constant. Of course, we can argue about that unit. The elementary charge for electromagnetism was or is – historically – the charge of the proton (q = +1), but now the proton is no longer elementary: it consists of quarks with charge –1/3 and +2/3 (for the d and u quark respectively), and a proton consists of two u quarks and one d quark, so you can write it as uud. So what’s j then? Feynman doesn’t give its precise value but uses an approximate value of –0.1. It is an amplitude, so it should be interpreted as a complex number to be added or multiplied with other complex numbers representing amplitudes – so –0.1 is “a shrink to about one-tenth, and half a turn.” [In these 1985 Lectures on QED, which he wrote for a lay audience, he calls amplitudes ‘arrows’, to be combined with other ‘arrows.’ In complex notation, –0.1 = 0.1e^iπ = 0.1(cos π + i sin π).]
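To make that ‘shrink and turn’ business concrete: amplitudes are complex numbers, and when you multiply them, the shrink factors multiply and the turns add. A quick Python check with Feynman’s rough j ≈ –0.1:

    import cmath

    j = -0.1                       # Feynman's rough coupling amplitude
    print(abs(j), cmath.phase(j))  # 0.1 and pi: a shrink to one-tenth, half a turn

    # A photon exchange involves two couplings: one emission and one absorption.
    print(j * j)  # 0.01: a shrink to one-hundredth, and the two half-turns cancel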
Let me give a precise number. The coupling constant for the electromagnetic force is the so-called fine-structure constant, and it’s usually denoted by the alpha symbol (α). There is a remarkably easy formula for α, which becomes even easier if we fiddle with units to simplify the matter even more. Let me paraphrase Wikipedia on α here, because I have no better way of summarizing it (the summary is also nice as it shows how changing units – replacing the SI units by so-called natural units – can simplify equations):
1. There are three equivalent definitions of α in terms of other fundamental physical constants:
\alpha = \frac{k_\mathrm{e} e^2}{\hbar c} = \frac{1}{(4 \pi \varepsilon_0)} \frac{e^2}{\hbar c} = \frac{e^2 c \mu_0}{2 h}
where e is the elementary charge (so that’s the electric charge of the proton); ħ = h/2π is the reduced Planck constant; c is the speed of light (in vacuum); ε0 is the electric constant (i.e. the so-called permittivity of free space); µ0 is the magnetic constant (i.e. the so-called permeability of free space); and ke is the Coulomb constant.
2. In the old centimeter-gram-second variant of the metric system (cgs), the unit of electric charge is chosen such that the Coulomb constant (or the permittivity factor) equals 1. Then the expression of the fine-structure constant just becomes:

\alpha = \frac{e^2}{\hbar c}
3. When using so-called natural units, we equate ε0, c and ħ to 1. [That does not mean they are the same, but they just become the unit of measurement for whatever is measured in them. :-)] The value of the fine-structure constant then becomes:
\alpha = \frac{e^2}{4 \pi}.
Of course, then it just becomes a matter of choosing a value for e. Indeed, we still haven’t answered the question as to what we should choose as ‘elementary’: 1 or 1/3? If we take 1, then α is just a bit smaller than 0.08 (around 0.0795775 to be somewhat more precise). If we take 1/3 (the value for a quark), then we get a much smaller value: about 0.008842 (I won’t bother too much about the rest of the decimals here). Feynman’s (very) rough approximation of –0.1 obviously uses the historic proton charge, so e = +1.
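Those numbers are easy to verify with the natural-units formula above – a one-liner each (my own little Python check):

    import math

    def alpha_natural(e):
        """Fine-structure constant in natural units: alpha = e^2 / (4*pi)."""
        return e**2 / (4 * math.pi)

    print(alpha_natural(1.0))  # ~0.0795775: with the proton charge as the unit
    print(alpha_natural(1/3))  # ~0.008842: with the d quark's 1/3 charge
    print(1/137)               # ~0.0072993: roughly the measured value (1/137)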
The coupling constant for the strong force is much bigger. In fact, if we use the SI units (i.e. one of the three formulas for α under point 1 above), then we get an alpha equal to some 7.297×10⁻³. Its value is usually quoted via its reciprocal: 1/α ≈ 137. In this scheme of things, the coupling constant for the strong force is 1, so that’s 137 times bigger.
Coupling constants, interactions, and Feynman diagrams
So how does it work? The Wikipedia article on coupling constants makes an extremely useful distinction between the kinetic part and the proper interaction part of an ‘interaction’. Indeed, before we just blindly associate qubits with particles, it’s probably useful to not only look at how photon absorption and/or emission works, but also at how a process as common as photon scattering works (so we’re talking Compton scattering here – discovered in 1923, and it earned Compton a Nobel Prize !).
The illustration below separates the kinetic and interaction part properly: the photon and the electron are both deflected (i.e. the magnitude and/or direction of their momentum (p) changes) – that’s the kinetic part – but, in addition, the frequency of the photon (and, hence, its energy – cf. E = hν) is also affected – so that’s the interaction part I’d say.
Compton scattering
With an absorption or an emission, the situation is different, but it also involves frequencies (and, hence, energy levels), as shown below: an electron absorbing a higher-energy photon will jump two or more levels as it absorbs the energy by moving to a higher energy level (i.e. a so-called excited state), and when it falls back and re-emits the energy, the bigger the jump, the higher the energy – and, hence, the frequency – of the emitted photon.
This business of frequencies and energy levels may not be so obvious when looking at those Feynman diagrams, but I should add that these Feynman diagrams are not just sketchy drawings: the time and space axes are precisely defined (time and distance are measured in equivalent units) and so the trajectory of particles (photons, electrons, or whatever particle is depicted) does reflect the direction of travel and, hence, conveys precious information about both the direction as well as the magnitude of the momentum of those particles. That being said, a Feynman diagram does not care about a photon’s frequency and, hence, its energy (its velocity will always be c, and it has no mass, so we can’t get any information from its trajectory).
Let’s look at these Feynman diagrams now, and the underlying force model, which I refer to as the boson exchange model.
The boson exchange model
The quantum field model – for all forces – is a boson exchange model. In this model, electrons, for example, are kept in orbit through the continuous exchange of (virtual) photons between the proton and the electron, as shown below.
Electron-proton

Now, I should say a few words about these ‘virtual’ photons. The most important thing is that you should look at them as being ‘real’. They may be derided as being only temporary disturbances of the electromagnetic field but they are very real force carriers in the quantum field theory of electromagnetism. They may carry very low energy as compared to ‘real’ photons, but they do conserve energy and momentum – in quite a strange way obviously: while it is easy to imagine a photon pushing an electron away, it is a bit more difficult to imagine it pulling it closer, which is what it does here. Nevertheless, that’s how forces are being mediated by virtual particles in quantum mechanics: we have matter particles carrying charge but neutral bosons taking care of the exchange between those charges.
In fact, note how Feynman actually cares about the possibility of one of those ‘virtual’ photons briefly disintegrating into an electron-positron pair, which underscores the ‘reality’ of photons mediating the electromagnetic force between a proton and an electron, thereby keeping them close together. There is probably no better illustration to explain the difference between quantum field theory and the classical view of forces, such as the classical view on gravity: there are no gravitons doing for gravity what photons are doing for electromagnetic attraction (or repulsion).
Pandora’s Box
I cannot resist a small digression here. The ‘Box of Pandora’ to which Feynman refers in the caption of the illustration above is the problem of calculating the coupling constants. Indeed, j is the coupling constant for an ‘ideal’ electron to couple with some kind of ‘ideal’ photon, but how do we calculate that when we actually know that all possible paths in spacetime have to be considered and that we have all of this ‘virtual’ mess going on? Indeed, in experiments, we can only observe probabilities for real electrons to couple with real photons.
In the ‘Chapter 4’ to which the caption makes a reference, he briefly explains the mathematical procedure, which he invented and for which he got a Nobel Prize. He calls it a ‘shell game’. It’s basically an application of ‘perturbation theory’, which I haven’t studied yet. However, he does so with skepticism about its mathematical consistency – skepticism which I mentioned and explored somewhat in previous posts, so I won’t repeat that here. Here, I’ll just note that the issue of ‘mathematical consistency’ is much more of an issue for the strong force, because the coupling constant is so big.
Indeed, terms with j², j³, j⁴, etcetera (i.e. the terms involved in adding amplitudes for all possible paths and all possible ways in which an event can happen) quickly become very small as the exponent increases, but terms with g², g³, g⁴, etcetera do not become negligibly small. In fact, they don’t become irrelevant at all. Indeed, if we write α for the electromagnetic force as 7.297×10⁻³, then the α for the strong force is one, and so none of these terms becomes vanishingly small. I won’t dwell on this, but just quote Wikipedia’s very succinct appraisal of the situation: “If α is much less than 1 [in a quantum field theory with a dimensionless coupling constant α], then the theory is said to be weakly coupled. In this case it is well described by an expansion in powers of α called perturbation theory. [However] If the coupling constant is of order one or larger, the theory is said to be strongly coupled. An example of the latter [the only example as far as I am aware: we don’t have like a dozen different forces out there !] is the hadronic theory of strong interactions, which is why it is called strong in the first place. [Hadrons is just a difficult word for particles composed of quarks – so don’t worry about it: you understand what is being said here.] In such a case non-perturbative methods have to be used to investigate the theory.”
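You can actually see the difference between ‘weakly coupled’ and ‘strongly coupled’ by just listing the successive powers – i.e. the sizes of the successive terms in the perturbation expansion – for both values of α. A quick sketch:

    alpha_em, alpha_strong = 7.297e-3, 1.0

    for order in range(1, 5):
        print(f"order {order}: EM ~ {alpha_em**order:.1e}, strong ~ {alpha_strong**order:.1e}")

    # The EM terms shrink by a factor of ~137 at every order, so a few orders suffice.
    # The strong-force terms stay at 1: the expansion never becomes negligible.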
Hmm… If Feynman thought his technique for calculating weak coupling constants was fishy, then his skepticism about whether or not physicists actually know what they are doing when calculating stuff using the strong coupling constant is probably justified. But let’s come back on that later. With all that we know here, we’re ready to present a picture of the ‘first-generation world’.
The first-generation world
The first generation is our world, excluding all that goes on in those particle accelerators, where they discovered so-called second- and third-generation matter – but I’ll come back to that. Our world consists of only four matter particles, collectively referred to as (first-generation) fermions: two quarks (a u and a d type), the electron, and the neutrino. This is what is shown below.
first-generation matter
Indeed, u and d quarks make up protons and neutrons (a proton consists of two u quarks and one d quark, and a neutron must be neutral, so it’s two d quarks and one u quark), and then there’s electrons circling around them and so that’s our atoms. And from atoms, we make molecules and then you know the rest of the story. Genesis !
Oh… But why do we need the neutrino? [Damn – you’re smart ! You see everything, don’t you? :-)] Well… There’s something referred to as beta decay: this allows a neutron to become a proton (and vice versa). Beta decay explains why carbon-14 will spontaneously decay into nitrogen-14. Indeed, carbon-12 is the (very) stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years ‘only’ and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). Now, a beta particle can refer to an electron or a positron, so we can have β– decay (e.g. the above-mentioned carbon-14 decay) or β+ decay (e.g. magnesium-23 into sodium-23). If we have β– decay, then some electron will be flying out in order to make sure the atom as a whole stays electrically neutral. If it’s β+ decay, then emitting a positron will do the job (I forgot to mention that each of the particles above also has an anti-matter counterpart – but don’t think I tried to hide anything else: the fermion picture above is pretty complete). That being said, Wolfgang Pauli, one of those geniuses who invented quantum theory, noted, in 1930 already, that some momentum and energy was missing, and so he predicted the emission of these mysterious neutrinos as well. Guess what? These things are very spooky (relatively high-energy neutrinos produced by stars (our Sun in the first place) are going through your and my body, right now and right here, at a rate of some hundred trillion per second) but, because they are so hard to detect, the first actual trace of their existence was found in 1956 only. [Neutrino detection is fairly standard business now, however.] But back to quarks now.
Quarks are held together by gluons – as you probably know. Quarks come in flavors (u and d), but gluons come in ‘colors’. It’s a bit of a stupid name but the analogy works great. Quarks exchange gluons all of the time and so that’s what ‘glues’ them so strongly together. Indeed, the so-called ‘mass’ that gets converted into energy when a nuclear bomb explodes is not the mass of quarks (their mass is only 2.4 and 4.8 MeV/c² respectively). Nuclear power is binding energy between quarks that gets converted into heat and radiation and kinetic energy and whatever else a nuclear explosion unleashes. That binding energy is reflected in the difference between the mass of a proton (or a neutron) – around 938 MeV/c² – and the mass figure you get when you add two u‘s and one d, which is then 9.6 MeV/c² only. This ratio – a factor of one hundred – illustrates once again the strength of the strong force: 99% of the ‘mass’ of a proton or a neutron is due to the strong force.
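That 99% statement is just arithmetic – here’s the check, using the mass values quoted above:

    m_u, m_d = 2.4, 4.8  # quark masses in MeV/c², as quoted above
    m_proton = 938.0     # proton mass in MeV/c²

    quark_mass = 2 * m_u + m_d        # uud: 9.6 MeV/c²
    print(1 - quark_mass / m_proton)  # ~0.99: almost all the mass is binding energy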
But I am digressing too much, and I haven’t even started to talk about the bosons associated with the weak force. Well… I won’t just now. I’ll just move on to the second- and third-generation world.
Second- and third-generation matter
When physicists started to look for those quarks in their particle accelerators, Nature had already confused them by producing lots of other particles in these accelerators: in the 1960s, there were more than four hundred of them. Yes. Too much. But they couldn’t get them back in the box. 🙂
Now, all these ‘other particles’ are unstable but they survive long enough – a muon, for example, disintegrates after 2.2 millionths of a second (on average) – to deserve the ‘particle’ title, as opposed to a ‘resonance’, whose lifetime can be as short as a billionth of a trillionth of a second. And so, yes, the physicists had to explain them too. So the guys who devised the quark-gluon model (the model is usually associated with Murray Gell-Mann but – as usual with great ideas – some others worked hard on it as well) had already included heavier versions of their quarks to explain (some of) these other particles. And so we do not only have heavier quarks, but also a heavier version of the electron (that’s the muon I mentioned) as well as a heavier version of the neutrino (the so-called muon neutrino). The two new ‘flavors’ of quarks were called s and c. [Feynman hates these names but let me give them: u stands for up, d for down, s for strange and c for charm. Why? Well… According to Feynman: “For no reason whatsoever.”]
Traces of the second-generation s and c quarks were found in experiments in 1968 and 1974 respectively (it took six years to boost the particle accelerators sufficiently), and the third-generation b quark (for beauty or bottom – whatever) popped up in Fermilab‘s particle accelerator in 1978. To be fully complete, it then took 17 years to detect the super-heavy t quark – which stands for truth. [Of all the quarks, this name is probably the nicest: “If beauty, then truth” – as Lederman and Schramm write in their 1989 history of all of this.]
What’s next? Will there be a fourth or even fifth generation? Back in 1985, Feynman didn’t exclude it (and actually seemed to expect it), but current assessments are more prosaic. Indeed, Wikipedia writes that, “According to the results of the statistical analysis by researchers from CERN and the Humboldt University of Berlin, the existence of further fermions can be excluded with a probability of 99.99999% (5.3 sigma).” If you want to know why… Well… Read the rest of the Wikipedia article. It’s got to do with the Higgs particle.
So the complete model of reality is the one I already inserted in a previous post and, if you find it complicated, remember that the first generation of matter is the one that matters and, among the bosons, it’s the photons and gluons. If you focus on these only, it’s not complicated at all – and surely a huge improvement over those 400+ particles no one understood in the 1960s.
As for the interactions, quarks stick together – and rather firmly so – by interchanging gluons. They thereby ‘change color’ (which is the same as saying there is some exchange of ‘charge’). I copy Feynman’s original illustration hereunder (not because there’s no better illustration: the stuff you can find on Wikipedia has actual colors !) but just because it reflects the other illustrations above (and, perhaps, I also want to make sure – with this black-and-white thing – that you don’t think there’s something like ‘real’ color inside of a nucleus).
quark gluon exchange
So what are the loose ends then? The problem of ‘mathematical consistency’ associated with the techniques used to calculate (or estimate) these coupling constants – which Feynman identified as a key defect in 1985 – is a form of skepticism about the Standard Model that is not shared by many others. The real loose ends are about the other forces. So let’s now talk about these.
The weak force as the weird force: about symmetry breaking
I included the weak force in the title of one of the sub-sections above (“The three-forces model”) and then talked about the other two forces only. The W+, W– and Z0 bosons – usually referred to, as a group, as the W bosons, or the ‘intermediate vector bosons’ – are an odd bunch. First, note that they are the only ones that not only have a (rest) mass (and not just a little bit: they’re almost 100 times heavier than a proton or neutron – or a hydrogen atom !) but, on top of that, they also have electric charge (except for the Z boson). They are really the odd ones out. Feynman does not doubt their existence (a CERN team produced them in 1983, and they got a Nobel Prize for it, so little room for doubts here !), but it is obvious he finds the weak force interaction model rather weird.
He’s not the only one: in a wonderful publication designed to make a case for more powerful particle accelerators (probably successful, because the Large Hadron Collider came through – and discovered credible traces of the Higgs field, which is involved in the story that is about to follow), Leon Lederman and David Schramm look at the asymmetry involved in having massive W bosons and massless photons and gluons as just one of the many asymmetries associated with the weak force. Let me develop this point.
We like symmetries. They are aesthetic. But so I am talking about something else here: in classical physics, characterized by strict causality and determinism, we can – in theory – reverse the arrow of time. In practice, we can’t – because of entropy – but, in theory, so-called reversible machines are not a problem. However, in quantum mechanics we cannot reverse time for reasons that have nothing to do with thermodynamics. In fact, there are several types of symmetries in physics:
1. Parity (P) symmetry revolves around the notion that Nature should not distinguish between right- and left-handedness, so everything that works in our world, should also work in the mirror world. Now, the weak force does not respect P symmetry. That was shown by experiments on the decay of pions, muons and radioactive cobalt-60 in 1956 and 1957 already.
2. Charge conjugation or charge (C) symmetry revolves around the notion that a world in which we reverse all (electric) charge signs (so protons would have minus one as charge, and electrons have plus one) would also just work the same. The same 1957 experiments showed that the weak force does also not respect C symmetry.
3. Initially, smart theorists noted that the combined operation of CP was respected by these 1957 experiments (hence, the principle of P and C symmetry could be substituted by a combined CP symmetry principle) but, then, in 1964, Val Fitch and James Cronin, proved that the spontaneous decay of neutral kaons (don’t worry if you don’t know what particle this is: you can look it up) into pairs of pions did not respect CP symmetry. In other words, it was – again – the weak force not respecting symmetry. [Fitch and Cronin got a Nobel Prize for this, so you can imagine it did mean something !]
4. We mentioned time reversal (T) symmetry: how is that being broken? In theory, we can imagine a film being made of those events not respecting P, C or CP symmetry and then just pressing the ‘reverse’ button, can’t we? Well… I must admit I do not master the details of what I am going to write now, but let me just quote Lederman (another Nobel Prize physicist) and Schramm (an astrophysicist): “Years before this, [Wolfgang] Pauli [Remember him from his neutrino prediction?] had pointed out that a sequence of operations like CPT could be imagined and studied; that is, in sequence, change all particles to antiparticles, reflect the system in a mirror, and change the sign of time. Pauli’s theorem was that all nature respected the CPT operation and, in fact, that this was closely connected to the relativistic invariance of Einstein’s equations. There is a consensus that CPT invariance cannot be broken – at least not at energy scales below 10¹⁹ GeV [i.e. the Planck scale]. However, if CPT is a valid symmetry, then, when Fitch and Cronin showed that CP is a broken symmetry, they also showed that T symmetry must be similarly broken.” (Lederman and Schramm, 1989, From Quarks to the Cosmos, p. 122-123)
So the weak force doesn’t care about symmetries. Not at all. That being said, there is an obvious difference between the asymmetries mentioned above, and the asymmetry involved in W bosons having mass and other bosons not having mass. That’s true. Especially because now we have that Higgs field to explain why W bosons have mass – and not only W bosons but also the matter particles (i.e. the three generations of leptons and quarks discussed above). The diagram below shows what interacts with what.
2000px-Elementary_particle_interactions.svg

But so the Higgs field does not interact with photons and gluons. Why? Well… I am not sure. Let me copy the Wikipedia explanation: “The Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W– and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realized as) the massive Higgs boson.”
Huh? […] This ‘answer’ probably doesn’t answer your question. What I understand from the explanation above, is that the Higgs field only interacts with W bosons because its (theoretical) structure is such that it only interacts with W bosons. Now, you’ll remember Feynman’s oft-quoted criticism of string theory: “I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say, ‘Well, it might be true.’” Is the Higgs theory such a cooked-up explanation? No. That kind of criticism would not apply here, in light of the fact that – some 50 years after the theory – there is (some) experimental confirmation at least !
But you’ll admit it does all look ‘somewhat ugly.’ However, while that’s a ‘loose end’ of the Standard Model, it’s not a fundamental defect or so. The argument is more about aesthetics, but then different people have different views on aesthetics – especially when it comes to mathematical attractiveness or unattractiveness.
So… No real loose end here I’d say.
The other ‘loose end’ that Feynman mentions in his 1985 summary is obviously still very relevant today (much more than his worries about the weak force I’d say). It is the lack of a quantum theory of gravity. There is none. Of course, the obvious question is: why would we need one? We’ve got Einstein’s theory, don’t we? What’s wrong with it?
The short answer to the last question is: nothing’s wrong with it – on the contrary ! It’s just that it is – well… – classical physics. No uncertainty. As such, the formalism of quantum field theory cannot be applied to gravity. That’s it. What’s Feynman’s take on this? [Sorry I refer to him all the time, but I made it clear in the introduction of this post that I would be discussing ‘his’ loose ends indeed.] Well… He makes two points – a practical one and a theoretical one:
1. “Because the gravitation force is so much weaker than any of the other interactions, it is impossible at the present time to make any experiment that is sufficiently delicate to measure any effect that requires the precision of a quantum theory to explain it.”
Feynman is surely right about gravity being ‘so much weaker’. Indeed, you should note that, at a scale of 10⁻¹³ cm (that’s the femtometer scale – so that’s the relevant scale indeed at the sub-atomic level), the coupling constants compare as follows: if the coupling constant of the strong force is 1, the coupling constant of the electromagnetic force is approximately 1/137, so that’s a factor of 10⁻² approximately. The strength of the weak force as measured by the coupling constant would be smaller by a factor of 10⁻¹³ (so that’s 1/10,000,000,000,000 smaller). Incredibly small, but so we do have a quantum field theory for the weak force ! However, the coupling constant for the gravitational force involves a factor of 10⁻³⁸. Let’s face it: this is unimaginably small.
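Putting those numbers side by side makes Feynman’s point rather brutally (rough orders of magnitude only, all relative to the strong force at the femtometer scale):

    couplings = {
        "strong": 1.0,
        "electromagnetic": 1e-2,  # ~1/137
        "weak": 1e-13,
        "gravity": 1e-38,
    }
    for force, strength in couplings.items():
        print(f"{force:>15}: {strength:.0e} ({strength / couplings['gravity']:.0e} x gravity)")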
However, Feynman wrote this in 1985 (i.e. thirty years ago) and scientists wouldn’t be scientists if they would not at least try to set up some kind of experiment. So there it is: LIGO. Let me quote Wikipedia on it:
LIGO, which stands for the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment aiming to directly detect gravitational waves. […] At the cost of $365 million (in 2002 USD), it is the largest and most ambitious project ever funded by the NSF. Observations at LIGO began in 2002 and ended in 2010; no unambiguous detections of gravitational waves have been reported. The original detectors were disassembled and are currently being replaced by improved versions known as “Advanced LIGO”.
So, let’s see what comes out of that. I won’t put my money on it just yet. 🙂 Let’s go to the theoretical problem now.
2. “Even though there is no way to test them, there are, nevertheless, quantum theories of gravity that involve ‘gravitons’ (which would appear under a new category of polarizations, called spin “2”) and other fundamental particles (some with spin 3/2). The best of these theories is not able to include the particles that we do find, and invents a lot of particles that we don’t find. [In addition] The quantum theories of gravity also have infinities in the terms with couplings [Feynman does not refer to a coupling constant but to a factor n appearing in the so-called propagator for an electron – don’t worry about it: just note it’s a problem with one of those constants actually being larger than one !], but the “dippy process” that is successful in getting rid of the infinities in quantum electrodynamics doesn’t get rid of them in gravitation. So not only have we no experiments with which to check a quantum theory of gravitation, we also have no reasonable theory.”
Phew ! After reading that, you wouldn’t apply for a job at that LIGO facility, would you? That being said, the fact that there is a LIGO experiment would seem to undermine Feynman’s practical argument. But then is his theoretical criticism still relevant today? I am not an expert, but it would seem to be the case according to Wikipedia’s update on it:
“Although a quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, difficulties arise when one attempts to apply the usual prescriptions of quantum field theory. From a technical point of view, the problem is that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity, the most popular approaches being string theory and loop quantum gravity.”
Hmm… String theory and loop quantum gravity? That’s the stuff that Penrose is exploring. However, I’d suspect that for these (string theory and loop quantum gravity), Feynman’s criticism probably still rings true – to some extent at least:
“I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say, “Well, it might be true.” For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there’s no reason whatsoever in superstring theory that it isn’t eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn’t produce anything; it has to be excused most of the time. It doesn’t look right.”
What to say by way of conclusion? Not sure. I think my personal “research agenda” is reasonably simple: I just want to try to understand all of the above somewhat better and then, perhaps, I might be able to understand some of what Roger Penrose is writing. 🙂
Bad thinking: photons versus the matter wave
In my previous post, I wrote that I was puzzled by that relation between the energy and the size of a particle: higher-energy photons are supposed to be smaller and, pushing that logic to the limit, we get photons becoming black holes at the Planck scale. Now, understanding what the Planck scale is all about, is important to understand why we’d need a GUT, and so I do want to explore that relation between size and energy somewhat further.
I found the answer by a coincidence. We’ll call it serendipity. 🙂 Indeed, an acquaintance of mine who is very well versed in physics pointed out a terrible mistake in (some of) my reasoning in the previous posts: photons do not have a de Broglie wavelength. They just have a wavelength. Full stop. It immediately reduced my bemusement about that energy-size relation and, in the end, eliminated it completely. So let’s analyze that mistake – which seems to be a fairly common freshman mistake judging from what’s being written about it in some of the online discussions on physics.
If photons are not to be associated with a de Broglie wave, it basically means that the Planck relation has nothing to do with the de Broglie relation, even if these two relations are identical from a pure mathematical point of view:
1. The Planck relation E = hν states that electromagnetic waves with frequency ν come as a bunch of discrete packets of energy referred to as photons, and that the energy of these photons is proportional to the frequency of the electromagnetic wave, with the Planck constant h as the factor of proportionality. In other words, the energy of these packets comes in units of h (times the frequency), which is why h is referred to as the quantum of action.
2. The de Broglie relation E = hf assigns a de Broglie wave with frequency f to a matter particle with energy E = mc² = γm0c². [The factor γ in this formula is the Lorentz factor: γ = 1/√(1 – v²/c²). It just corrects for the relativistic effect on mass as the velocity of the particle (v) gets closer to the speed of light (c).]
These are two very different things: photons do not have rest mass (which is why they can travel at light speed) and, hence, they are not to be considered as matter particles. Therefore, one should not assign a de Broglie wave to them. So what are they then? A photon is a wave packet, but it’s an electromagnetic wave packet. Hence, its wave function is not some complex-valued psi function Ψ(x, t). What is oscillating in the illustration below (let’s say this is a procession of photons) is the electric field vector E. [To get the full picture of the electromagnetic wave, you should also imagine a (tiny) magnetic field vector B, which oscillates perpendicular to E, but that does not make much of a difference. Finally, in case you wonder about these dots: the red and green dot just make it clear that the phase and group velocity of the wave are the same: vg = vp = v = c.]

[Illustration: wave with the same group and phase velocity]

The point to note is that we have a real wave here: it is not a de Broglie wave. A de Broglie wave is a complex-valued function Ψ(x, t) with two oscillating parts: (i) the so-called real part of the complex value Ψ, and (ii) the so-called imaginary part (and, despite its name, that counts as much as the real part when working with Ψ !). That’s what’s shown in the examples of complex (standing) waves below: the blue part is one part (let’s say the real part), and the salmon color is the other part. We need to square the modulus of that complex value to find the probability P of detecting that particle in space at point x at time t: P(x, t) = |Ψ(x, t)|². Now, if we write Ψ(x, t) as Ψ = u(x, t) + i·v(x, t), then u(x, t) is the real part, and v(x, t) is the imaginary part. |Ψ(x, t)|² is then equal to u² + v², which shows that both the blue as well as the salmon amplitude matter when doing the math.
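For those who like to check such things numerically: here’s a minimal Python sketch. The wave itself is just a made-up toy function (a Gaussian envelope times a complex exponential – not a solution of any particular equation), but it shows that the probability density is indeed u² + v², with both parts contributing:

```python
import numpy as np

# A toy complex-valued wave: Gaussian envelope times exp(i*k*x).
x = np.linspace(-10, 10, 1000)
psi = np.exp(-x**2 / 8) * np.exp(1j * 5 * x)

u = psi.real  # the 'blue' oscillating part
v = psi.imag  # the 'salmon' oscillating part

# The probability density is the squared modulus: u^2 + v^2.
prob_density = u**2 + v**2
assert np.allclose(prob_density, np.abs(psi)**2)
```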
So, while I may have given the impression that the Planck relation was like a limit of the de Broglie relation for particles with zero rest mass traveling at speed c, that’s just plain wrong ! The description of a particle with zero rest mass fits a photon but the Planck relation is not the limit of the de Broglie relation: photons are photons, and electrons are electrons, and an electron wave has nothing to do with a photon. Electrons are matter particles (fermions as physicists would say), and photons are bosons, i.e. force carriers.
Let’s now re-examine the relationship between the size and the energy of a photon. If the wave packet below would represent an (ideal) photon, what is its energy E as a function of the electric and magnetic field vectors E and B? [Note that the (non-boldface) E stands for energy (i.e. a scalar quantity, so it’s just a number) indeed, while the (bold) E stands for the (electric) field vector – so that’s something with a magnitude E (the symbol in italics once again, to distinguish it from the energy E) and a direction.] Indeed, if a photon is nothing but a disturbance of the electromagnetic field, then the energy E of this disturbance – which obviously depends on E and B – must also be equal to E = hν according to the Planck relation. Can we show that?
Well… Let’s take a snapshot of a plane-wave photon, i.e. a photon oscillating in a two-dimensional plane only. That plane is perpendicular to our line of sight here:
Because it’s a snapshot (time is not a variable), we may look at this as an electrostatic field: all points in the interval Δx are associated with some magnitude (i.e. the magnitude of our electric field E), and points outside of that interval have zero amplitude. It can then be shown (just browse through any course on electromagnetism) that the energy density (i.e. the energy per unit volume) is equal to (1/2)ε0E² (ε0 is the electric constant which we encountered in previous posts already). To calculate the total energy of this photon, we should integrate over the whole distance Δx, from left to right. However, rather than bothering you with integrals, I think that (i) the ε0E²/2 formula and (ii) the illustration above should be sufficient to convince you that:
1. The energy of a photon is proportional to the square of the amplitude of the electric field. Such an E ∝ A² relation is typical of any real wave, be it a water wave or an electromagnetic wave. So if we double, triple, or quadruple its amplitude (i.e. the magnitude E of the electric field E), then the energy of this photon will be multiplied by four, nine, and sixteen respectively.
2. If we would not change the amplitude of the wave above but double, triple or quadruple its frequency, then we would only double, triple or quadruple its energy: there’s no squaring here. In other words, the Planck relation E = hν makes perfect sense, because it reflects that simple linear proportionality: there is nothing to be squared.
3. If we double the frequency but leave the amplitude unchanged, then the wave packet occupies only half of the Δx space. Indeed, because of that universal relationship between frequency and wavelength (the propagation speed of a wave equals the product of its wavelength and its frequency: v = λf), we have to halve the wavelength (and, hence, halve the Δx taken up by the same number of oscillations) to make sure our photon is still traveling at the speed of light. Note, however, that the (1/2)ε0E² integral over that halved interval then yields only half the energy – while the Planck relation says the energy should have doubled. So something else must change as well, as I’ll argue below.
Now, the Planck relation only says that higher energy is associated with higher frequencies: it does not say anything about amplitudes. But the energy integral does: if we double both the frequency and the amplitude, then the photon occupies only half of the Δx space and carries twice as much energy, because (2A)²·(Δx/2) = 2·A²·Δx – exactly what the Planck relation requires. So the only thing I now need to make plausible is that higher-frequency electromagnetic waves are effectively associated with larger-amplitude E‘s. That is something we get straight out of the laws of electromagnetic radiation: electromagnetic radiation is caused by oscillating electric charges, and it’s the magnitude of the acceleration (written as a in the formula below) of the oscillating charge that determines the amplitude. For a full write-up of these ‘laws’, I’ll refer to a textbook (or just download Feynman’s 28th Lecture on Physics), but let me just give the formula for the (vertical) component of E:

E = –q·a(t – r/c) / (4πε0c²·r)
You will recognize all of the variables and constants in this one: the electric constant ε0, the distance r, the speed of light (and of our wave) c, etcetera. The a is the acceleration: note that it’s a function not of t but of (t – r/c), so we’re talking the so-called retarded acceleration here, but don’t worry about that.
Now, higher frequencies effectively imply a higher magnitude of the acceleration vector: for a charge oscillating with a fixed amplitude, the acceleration scales with the square of the frequency (for x(t) = x0·sin(ωt), we have a = –ω²·x). So that’s what I had to prove, and so we’re done: higher-energy photons not only have a higher frequency but also a larger amplitude, and so they take up less space.
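If you want to check the energy bookkeeping numerically, here’s a minimal Python sketch. It makes the same simplifying assumptions as the snapshot argument above: a one-dimensional sine under a box envelope, with the energy (per unit of cross-sectional area) taken to be the integral of (1/2)ε0E² over Δx. The packet_energy function and the sample numbers are mine, just for illustration:

```python
import numpy as np

eps0 = 8.854e-12  # the electric constant (F/m)
c = 3e8           # speed of light (m/s)

def packet_energy(amplitude, freq, width):
    """Integral of (1/2)*eps0*E(x)^2 over a snapshot of length 'width',
    with E(x) a sine of wavelength c/freq (a crude box-envelope packet)."""
    x, dx = np.linspace(0, width, 100_000, retstep=True)
    E = amplitude * np.sin(2 * np.pi * x * freq / c)
    return np.sum(0.5 * eps0 * E**2) * dx

E1 = packet_energy(amplitude=1.0, freq=4.5e14, width=1e-5)
# Double the frequency AND the amplitude, and halve the packet length:
E2 = packet_energy(amplitude=2.0, freq=9.0e14, width=0.5e-5)
print(E2 / E1)  # ~2: twice the energy packed into half the space
```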
It would be nice if I could derive some kind of equation to specify the relation between energy and size, but I am not that advanced in math (yet). 🙂 I am sure it will come.
Post scriptum 1: The ‘mistake’ I made obviously fully explains why Feynman is only interested in the amplitude of a photon to go from point A to B, and not in the amplitude of a photon to be at point x at time t. The question of the ‘size of the arrows’ then becomes a question related to the so-called propagator function, which gives the probability amplitude for a particle (a photon in this case) to travel from one place to another in a given time. The answer seems to involve another important buzzword when studying quantum mechanics: the gauge parameter. However, that’s also advanced math which I don’t master (as yet). I’ll come back on it… Hopefully… 🙂
Post scriptum 2: As I am re-reading some of my post now (i.e. on 12 January 2015), I noted how immature this post is. I wanted to delete it, but finally I didn’t, as it does illustrate my (limited) progress. I am still struggling with the question of a de Broglie wave for a photon, but I dare to think that my analysis of the question at least is a bit more mature now: please see one of my other posts on it.
Re-visiting the Uncertainty Principle
Let me, just like Feynman did in his last lecture on quantum electrodynamics for Alix Mautner, discuss some loose ends. Unlike Feynman, I will not be able to tie them up. However, just describing them might be interesting and perhaps you, my imaginary reader, could actually help me with tying them up ! Let’s first re-visit the wave function for a photon by way of introduction.
The wave function for a photon
Let’s not complicate things from the start and, hence, let’s first analyze a nice Gaussian wave packet, such as the right-hand graph below: Ψ(x, t). It could be a de Broglie wave representing an electron but here we’ll assume the wave packet might actually represent a photon. [Of course, do remember we should actually show both the real as well as the imaginary part of this complex-valued wave function, but we don’t want to clutter the illustration, and so it’s only one of the two (cosine or sine). The ‘other’ part (sine or cosine) is just the same but with a phase shift. Indeed, remember that a complex number r·e^(iθ) is equal to r(cosθ + i·sinθ), and the shape of the sine function is the same as that of the cosine function but shifted by π/2. So if we have one, we have the other. End of digression.]
[Illustration: example of a wave packet]
The assumptions associated with this wonderful mathematical shape include the idea that the wave packet is a composite wave consisting of a large number of harmonic waves with wave numbers k1, k2, k3, … all lying around some mean value μk. That is what is shown in the left-hand graph. The mean value is actually noted as k-bar in the illustration above but, because I can’t find a k-bar symbol among the ‘special characters’ in the text editor tool bar here, I’ll use the statistical symbols μ and σ to represent a mean value (μ) and some spread around it (σ). In any case, we have a pretty normal shape here, resembling the Gaussian distribution illustrated below.
[Illustration: the normal (Gaussian) probability distribution]

These Gaussian distributions (also known as density functions) have outliers, but you will catch 95.4% of the observations within the μ ± 2σ interval, and 99.7% within the μ ± 3σ interval (that’s the so-called two- and three-sigma rule). Now, the shape of the left-hand graph of the first illustration, mapping the relation between k and A(k), is the same as this Gaussian density function, and if you would take a little ruler and measure the spread of k on the horizontal axis, you would find that the values for k are effectively spread over an interval that’s somewhat bigger than μk plus or minus 2Δk. So let’s say 95.4% of the values of k lie in the interval [μk – 2Δk, μk + 2Δk]. Hence, for all practical purposes, we can write that μk – 2Δk < kn < μk + 2Δk. In any case, we do not care too much about the rest, because their contribution to the amplitude of the wave packet is minimal anyway, as we can see from that graph. Indeed, note that the A(k) values on the vertical axis of that graph do not represent the density of the k variable: there is only one wave number for each component wave, and so there’s no distribution or density function of k. These A(k) numbers represent the (maximum) amplitude of the component waves of our wave packet Ψ(x, t). In short, they are the values A(k) appearing in the summation formula for our composite wave, i.e. the wave packet:
Ψ(x, t) = Σn A(kn)·e^(i(kn·x – ωn·t))
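By the way, if you want to check those two- and three-sigma numbers I quoted above: the fraction of a normally distributed variable falling within ±n standard deviations is erf(n/√2), so Python’s math module settles it in a few lines (no assumptions here beyond the normal distribution itself):

```python
from math import erf, sqrt

# Fraction of a normally distributed variable within n standard deviations:
for n in (1, 2, 3):
    print(n, round(erf(n / sqrt(2)), 4))  # 0.6827, 0.9545, 0.9973
```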
I don’t want to dwell much more on the math here (I’ve done that in my other posts already): I just want you to get a general understanding of that ‘ideal’ wave packet possibly representing a photon above so you can follow the rest of my story. So we have a (theoretical) bunch of (component) waves with different wave numbers kn, and the spread in these wave numbers – i.e. 2Δk, or let’s take 4Δk to make sure we catch (almost) all of them – determines the length of the wave packet Ψ, which is written here as 2Δx, or 4Δx if we’d want to include (most of) the tail ends as well. What else can we say about Ψ? Well… Maybe something about velocities and all that? OK.
To calculate velocities, we need both ω and k. Indeed, the phase velocity of a wave is vp = ω/k. Now, the wave number k of the wave packet itself – i.e. the wave number of the oscillating ‘carrier wave’, so to say – should be equal to μk, according to the article I took this illustration from. I should check that but, looking at that relationship between A(k) and k, I would not be surprised if the math behind it is right. So we have the k for the wave packet itself (as opposed to the k’s of its components). However, I also need the angular frequency ω.
So what is that ω? Well… That will depend on all the ω’s associated with all the k’s, won’t it? It does. But, as I explained in a previous post, the component waves do not necessarily all have to travel at the same speed, and so the relationship between ω and k may not be simple. We would love that, of course, but Nature does what it wants. The only reasonable constraint we can impose on all those ω’s is that they should be some linear function of k. Indeed, if we do not want our wave packet to dissipate (or disperse or, to put it even more plainly, to disappear), then the so-called dispersion relation ω = ω(k) should be linear, so ωn should be equal to ωn = a·kn + b. What are a and b? We don’t know. Just constants. But if the relationship is not linear, then the wave packet will disperse, and it cannot possibly represent a particle – be it an electron or a photon.
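Here’s a small numerical check of that claim – a minimal Python sketch in which the Gaussian weights A(k), the grid, and the constants a and b are all arbitrary choices of mine. The point is only that, with ω = a·k + b, the envelope at time t is exactly the t = 0 envelope shifted by a·t, so nothing disperses:

```python
import numpy as np

# Component wave numbers with Gaussian weights around a mean value:
ks = np.linspace(4.0, 6.0, 201)
A = np.exp(-((ks - 5.0) ** 2) / 0.1)

a, b = 2.0, 0.7  # an arbitrary linear dispersion relation: omega(k) = a*k + b
x = np.linspace(-20.0, 20.0, 2001)

def psi(x, t):
    """Superpose all component waves at time t."""
    return sum(Ak * np.exp(1j * (k * x - (a * k + b) * t))
               for Ak, k in zip(A, ks))

t = 3.0
# The envelope just translates at the group velocity a (here: by a*t = 6):
print(np.allclose(np.abs(psi(x, t)), np.abs(psi(x - a * t, 0))))  # True
```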
I won’t go through the math all over again but, in my Re-visiting the Matter Wave (I) post, I used the other de Broglie relationship (E = ħω) to show that – for matter waves that do not disperse – we will find that the phase velocity will equal c/β, with β = v/c, i.e. the ratio of the speed of our particle (v) and the speed of light (c). But, of course, photons travel at the speed of light and, therefore, everything becomes very simple, and the phase velocity of the wave packet of our photon would equal the group velocity. In short, we have:
vp = ω/k = vg = ∂ω/∂k = c
Of course, I should add that the angular frequency of all component waves will also be equal to ω = ck, so all component waves of the wave packet representing a photon are supposed to travel at the speed of light! What an amazingly simple result!
It is. In order to illustrate what we have here – especially the elegance and simplicity of that wave packet for a photon – I’ve uploaded two gif files (see below). The first one could represent our ‘ideal’ photon: group and phase velocity (represented by the speed of the green and red dot respectively) are the same. Of course, our ‘ideal’ photon would only be one wave packet – not a bunch of them like here – but then you may want to think that the ‘beam’ below might represent a number of photons following each other in a regular procession.
[Animation: wave with the same group and phase velocity]
The second animated gif below shows how phase and group velocity can differ. So that would be a (bunch of) wave packets representing a particle not traveling at the speed of light. The phase velocity here is faster than the group velocity (the red dot travels faster than the green dot). [One can actually also have a wave with positive group velocity and negative phase velocity – quite interesting ! – but so that would not represent a particle wave.] Again, a particle would be represented by one wave packet only (so that’s the space between two green dots only) but, again, you may want to think of this as representing electrons following each other in a very regular procession.
[Animation: wave packet with different group and phase velocities]

These illustrations (which I took, once again, from the online encyclopedia Wikipedia) are a wonderful pedagogic tool. I don’t know if it’s by coincidence but the group velocity of the second wave is actually somewhat slower than the first – so the photon versus electron comparison holds (electrons are supposed to move (much) slower). However, as for the phase velocities, they are the same for both waves and that would not reflect the results we found for matter waves. Indeed, you may or may not remember that we calculated superluminal speeds for the phase velocity of matter waves in that post I mentioned above (Re-visiting the Matter Wave): an electron traveling at a speed of 0.01c (1% of the speed of light) would be represented by a wave packet with a group velocity of 0.01c indeed, but its phase velocity would be 100 times the speed of light, i.e. 100c. [That being said, the second illustration may be interpreted as a little bit correct as the red dot does travel faster than the green dot, which – as I explained – is not necessarily always the case when looking at such composite waves (we can have slower or even negative speeds).]
Of course, I should once again repeat that we should not think that a photon or an electron is actually wriggling through space like this: the oscillation only represents the real or imaginary part of the complex-valued probability amplitude associated with our ‘ideal’ photon or our ‘ideal’ electron. That’s all. So this wave is an ‘oscillating complex number’, so to say, whose modulus we have to square to get the probability to actually find the photon (or electron) at some point x and some time t. However, the photon (or the electron) itself is just moving straight from left to right, with a speed matching the group velocity of its wave function.
Are they?
Well… No. Or, to be more precise: maybe. WHAT? Yes, that’s surely one ‘loose end’ worth mentioning! According to QED, photons also have an amplitude to travel faster or slower than light, and they are not necessarily moving in a straight line either. WHAT? Yes. That’s the complicated business I discussed in my previous post. As for the amplitudes to travel faster or slower than light, Feynman dealt with them very summarily. Indeed, you’ll remember the illustration below, which shows that the contributions of the amplitudes associated with slower or faster speed than light tend to nil because (a) their magnitude (or modulus) is smaller and (b) they point in the ‘wrong’ direction, i.e. not the direction of travel.
[Illustration: contribution interval]
Still, these amplitudes are there and – Shock, horror ! – photons also have an amplitude to not travel in a straight line, especially when they are forced to travel through a narrow slit, or right next to some obstacle. That’s diffraction, described as “the apparent bending of waves around small obstacles and the spreading out of waves past small openings” in Wikipedia.
Diffraction is one of the many phenomena that Feynman deals with in his 1985 Alix G. Mautner Memorial Lectures. His explanation is easy: “not enough arrows” – read: not enough amplitudes to add. With few arrows, there are also few that cancel out indeed, and so the final arrow for the event is quite random, as shown in the illustrations below.
[Illustrations: many arrows vs. few arrows]
So… Not enough arrows… Feynman adds the following on this: “[For short distances] The nearby, nearly straight paths also make important contributions. So light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. In the same way, a mirror has to have enough size to reflect normally; if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.” (QED, 1985, p. 54-56)
Not enough arrows… What does he mean by that? Not enough photons? No. Diffraction for photons works just the same as for electrons: even if the photons would go through the slit one by one, we would have diffraction (see my Revisiting the Matter Wave (II) post for a detailed discussion of the experiment). So even one photon is likely to take some random direction left or right after going through a slit, rather than to go straight. Not enough arrows means not enough amplitudes. But what amplitudes is he talking about?
These amplitudes have nothing to do with the wave function of our ideal photon we were discussing above: that’s the amplitude Ψ(x, t) of a photon to be at point x at time t. The amplitude Feynman is talking about is the amplitude of a photon to go from point A to B along one of the infinitely many possible paths it could take. As I explained in my previous post, we have to add all of these amplitudes to arrive at one big final arrow which, over longer distances, will usually be associated with a rather large probability that the photon traveled in a straight line and at the speed of light – which is what light seems to do at the macro-scale. 🙂
But back to that very succinct statement: not enough arrows. That’s obviously a very relative statement. Not enough as compared to what? What measurement scale are we talking about here? It’s obvious that the ‘scale’ of these arrows for electrons is different than for photons, because the 2012 diffraction experiment with electrons that I referred to used 50 nanometer slits (50×10⁻⁹ m), while one of the many experiments demonstrating light diffraction using pretty standard (red) laser light used slits of some 100 micrometer (that’s 100×10⁻⁶ m or – in units you are used to – 0.1 millimeter).
The key to the ‘scale’ here is the wavelength: the slit needs to be ‘small enough’ as compared to the wavelength of whatever passes through it. For example, the width of the slit in the laser experiment corresponded to (roughly) 150 times the wavelength of the laser light, while the 50 nanometer slit in that 2012 diffraction experiment was about a thousand times the (de Broglie) wavelength of the electrons, which was 50 picometer – but that was still OK enough to demonstrate diffraction. Much larger slits would not have done the trick. So, when it comes to light, we have diffraction at scales that do not involve nanotechnology, but when it comes to matter particles, we’re not talking micro but nano: that’s a thousand times smaller.
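To put some numbers on that, here’s a minimal Python sketch. The 600 eV electron energy is the (non-relativistic) example I’ll mention again further below; the slit widths are the ones just mentioned:

```python
from math import sqrt

h = 6.626e-34    # Planck constant (J*s)
m_e = 9.109e-31  # electron rest mass (kg)
eV = 1.602e-19   # one electronvolt in Joule

# de Broglie wavelength of a ~600 eV electron (p = sqrt(2*m*E_kin)):
lambda_e = h / sqrt(2 * m_e * 600 * eV)  # ~5e-11 m, i.e. ~50 picometer

lambda_red = 660e-9                      # red laser light

print(100e-6 / lambda_red)  # the 100 micrometer slit: ~150x the light's wavelength
print(50e-9 / lambda_e)     # the 50 nanometer slit: ~1000x the electron's wavelength
```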
The weird relation between energy and size
Let’s re-visit the Uncertainty Principle, even if Feynman says we don’t need it (we just need to do the amplitude math and we have it all). We wrote the uncertainty principle using the more scientific Kennard formulation: σx·σp ≥ ħ/2, in which the sigma symbol represents the standard deviation of position x and momentum p respectively. Now that’s confusing, you’ll say, because we were talking wave numbers, not momentum, in the introduction above. Well… The wave number k of a de Broglie wave is, of course, related to the momentum p of the particle we’re looking at: p = ħk. Hence, a spread in the wave numbers amounts to a spread in the momentum really and, as I wanted to talk scales, let’s now check the dimensions.
The value for ħ is about 1×10⁻³⁴ Joule·seconds (J·s) (it’s 1.054571726(47)×10⁻³⁴ J·s, to be precise, but let’s go with the gross approximation for now). One J·s is the same as one kg·m²/s, because 1 Joule is shorthand for 1 kg·m²/s². It’s a rather large unit, and you probably know that physicists prefer electronvolt·seconds (eV·s) because of that. However, even expressed in eV·s, the value for ħ comes out astronomically small: 6.58211928(15)×10⁻¹⁶ eV·s. In any case, because the J·s makes the dimensions come out right, I’ll stick to it for a while. What does this incredibly small factor of proportionality – both in the de Broglie relations as well as in the Kennard formulation of the uncertainty principle – imply? How does it work out from a math point of view?
Well… It’s literally a quantum of measurement: even if Feynman says the uncertainty principle should just be seen “in its historical context”, and that “we don’t need it for adding arrows”, it is a consequence of the (related) position-space and momentum-space wave functions for a particle. In case you would doubt that, check it on Wikipedia: the author of the article on the uncertainty principle derives it from these two wave functions, which form a so-called Fourier transform pair. But so what does it say really?
Look at it. First, it says that we cannot know either of the two values exactly (exactly means 100%), because then we’d have a zero standard deviation for one or the other variable, and the inequality would make no sense anymore: zero is obviously not greater than or equal to 0.527286×10⁻³⁴ J·s (i.e. ħ/2). However, the inequality with the value for ħ plugged in shows how close to zero we can get with our measurements. Let’s check it out.
Let’s use the assumption that two times the standard deviation (written as 2Δk or 2Δx on or above the two graphs in the very first illustration of this post) sort of captures the whole ‘range’ of the variable. It’s not a bad assumption: indeed, if Nature follows normal distributions – and in our macro-world, that seems to be the case – then we’d capture 95.4% of the values, so that’s good. Then we can re-write the uncertainty principle as:
Δx·σp ≥ ħ or σx·Δp ≥ ħ
So that means we know x within some interval (or ‘range’ if you prefer that term) Δx or, else, we know p within some interval (or ‘range’ if you prefer that term) Δp. But we want to know both within some range, you’ll say. Of course. In that case, the uncertainty principle can be written as:
Δx·Δp ≥ 2ħ
Huh? Why the factor 2? Well… Each of the two Δ ranges corresponds to 2σ (hence, σx = Δx/2 and σp = Δp/2), and so we have (1/2)Δx·(1/2)Δp ≥ ħ/2. Note that if we would equate our Δ with 3σ to get 99.7% of the values, instead of 95.4% only, once again assuming that Nature distributes all relevant properties normally (not sure – especially in this case, because we are talking discrete quanta of action here – so Nature may want to cut off the ‘tail ends’!), then we’d get Δx·Δp ≥ 4.5ħ: the cost of extra precision soars! Also note that, if we would equate Δ with σ (the one-sigma rule corresponds to 68.3% of a normally distributed range of values), then we get yet another ‘version’ of the uncertainty principle: Δx·Δp ≥ ħ/2. Pick and choose! And if we want to be purists, we should note that ħ is used when we express things in radians (such as the angular frequency, for example: E = ħω), so we should actually use h when we are talking distance and (linear) momentum. The Δx·Δp ≥ 2ħ equation above then becomes Δx·Δp ≥ h/π (because ħ = h/2π).
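Here’s that ‘pick and choose’ as a quick Python check – just the arithmetic, nothing more:

```python
hbar = 1.054571726e-34  # J*s

# With sigma = Delta/n, sigma_x*sigma_p >= hbar/2 becomes Dx*Dp >= (n^2/2)*hbar:
for n in (1, 2, 3):
    bound = n * n / 2
    print(f"Δ = {n}σ  ->  Δx·Δp >= {bound} ħ = {bound * hbar:.3e} J·s")
# n = 1, 2, 3 gives 0.5, 2.0 and 4.5 times hbar respectively
```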
It doesn’t matter all that much. The point to note is that, if we express x and p in regular distance and momentum units (m and kg·m/s), then the unit for ħ (or h) is of the order of 1×10⁻³⁴. Now, we can sort of choose how to spread the uncertainty over x and p. If we spread it evenly, then we’ll measure both Δx and Δp in units of 1×10⁻¹⁷ m and 1×10⁻¹⁷ kg·m/s respectively. That’s small… but not that small. In fact, it is (more or less) imaginably small, I’d say.
For example, a photon of red light (let’s say a wavelength of around 660 nanometer) has a momentum p = h/λ of some 1×10⁻²⁷ kg·m/s (just work it out using the values for h and λ). You would usually see this value measured in a unit that’s more appropriate to the atomic scale: about 1.9 eV/c. [Converting momentum into energy using E = pc, and using the Joule-electronvolt conversion (1 eV ≈ 1.6×10⁻¹⁹ J), will get you there.] Hence, units of 1×10⁻¹⁷ kg·m/s for momentum are ten orders of magnitude larger than the rather average momentum of our light photon. We can’t have that, so let’s reduce the uncertainty related to the momentum to a scale closer to that photon momentum – say, 1×10⁻²² kg·m/s. Then the uncertainty about position will be measured in units of 1×10⁻¹² m. That’s the picometer scale, in-between the nanometer (1×10⁻⁹ m) and the femtometer (1×10⁻¹⁵ m) scale. You’ll remember that this scale corresponds to the resolution of a (modern) electron microscope (50 pm). So can we see “uncertainty effects”? Yes. I’ll come back to that.
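Here’s the arithmetic behind those numbers as a Python sketch (the 1×10⁻²² kg·m/s momentum uncertainty is just the illustrative choice made above):

```python
h = 6.626e-34     # Planck constant (J*s)
hbar = 1.055e-34  # reduced Planck constant (J*s)

p_photon = h / 660e-9  # momentum of a 660 nm photon
print(p_photon)        # ~1.0e-27 kg*m/s

# With the momentum uncertainty set at 1e-22 kg*m/s, the Δx·Δp >= 2ħ
# version of the principle puts the position uncertainty at:
dp = 1e-22
print(2 * hbar / dp)   # ~2e-12 m: the picometer scale indeed
```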
However, before I discuss these, I need to make a little digression. Despite the sub-title I am using above, the uncertainties in distance and momentum we are discussing here are nowhere near what is referred to as the Planck scale in physics: the Planck scale is at the other side of that Great Desert I mentioned. The Large Hadron Collider, which smashes particles with (average) energies of 4 tera-electronvolt (i.e. 4 trillion eV – all packed into one particle !), is probing stuff at a scale of a thousandth of a femtometer (1×10⁻¹⁸ m), but we’re obviously at the limits of what’s technically possible, and so that’s where the Great Desert starts. The ‘other side’ of that Great Desert is the Planck scale: 10⁻³⁵ m. Now, why is that some kind of theoretical limit? Why can’t we just continue to cut these scales down further? Just like Dedekind did when defining irrational numbers? We can surely get infinitely close to zero, can’t we? Well… No. The reasoning is quite complex (and I am not sure if I actually understand it the way I should), but it is quite relevant to the topic here (the relation between energy and size), and it goes something like this:
1. In quantum mechanics, particles are considered to be point-like, but they do take up space, as evidenced from our discussion on slit widths: light will show diffraction at the micro-scale (10⁻⁶ m), but electrons will do that only at the nano-scale (10⁻⁹ m), so that’s a thousand times smaller. That’s related to their respective wavelengths: the de Broglie wavelength of these electrons is (much) smaller than the wavelength of visible light. Now, the de Broglie wavelength is related to the energy and/or the momentum of these particles: E = hf and p = h/λ.
2. Higher energies correspond to smaller de Broglie wavelengths and, hence, are associated with particles of smaller size. To continue the example, the energy formula to be used in the E = hf relation for an electron – or any particle with rest mass – is the (relativistic) mass-energy equivalence relation: E = γm0c², with γ the Lorentz factor, which depends on the velocity v of the particle. For example, electrons moving at more or less normal speeds (like in the 2012 experiment, or those used in an electron microscope) have typical energy levels of some 600 eV, and don’t think that’s a lot: the electrons from that cathode ray tube in the back of an old-fashioned TV – which lighted up the screen so you could watch it – had energies in the 20,000 eV range. So, for electrons, we are talking energy levels a hundred to ten thousand times higher than for your typical 2 to 10 eV photon.
3. Of course, I am not talking X or gamma rays here: hard X-rays also have energies of 10 to 100 kilo-electronvolt, and gamma-ray energies range from 1 million to 10 million eV (1–10 MeV). In any case, the point to note is that ‘small’ particles must have high energies, and I am not only talking massless particles such as photons. Indeed, in my post End of the Road to Reality?, I discussed the scale of a proton and the scale of quarks: 1.7 and 0.7 femtometer respectively, which is smaller than the so-called classical electron radius. So we have (much) heavier particles here that are smaller? Indeed: the rest mass of the u and d quarks that make up a proton (uud) is 2.4 and 4.8 MeV/c² respectively, while the (theoretical) rest mass of an electron is 0.511 MeV/c² only, so the three quarks add up to almost 20 times that: (2.4 + 2.4 + 4.8)/0.511 ≈ 19. Well… No, there’s more to it: the rest mass of a proton is actually 1836 times the rest mass of an electron. The difference between the added rest masses of the quarks that make it up and the rest mass of the proton itself (938 MeV/c²) is the mass equivalent of the energy of the strong force that keeps the quarks together.
4. But let me not complicate things. Just note that there seems to be a strange relationship between the energy and the size of a particle: high-energy particles are supposed to be smaller and, vice versa, smaller particles are associated with higher energy levels. If we accept this as some kind of ‘factual reality’, then we may understand what the Planck scale is all about: theoretical ‘particles’ of the above-mentioned Planck size (i.e. particles with a size in the 10⁻³⁵ m range) would have energy levels in the 10¹⁹ GeV range. So what? Well… That amount of energy, packed into so small a space, corresponds to the mass density of a black hole. So any ‘particle’ we’d associate with the Planck length would not make sense as a physical entity: it’s the scale where gravity takes over – everything.
Again: so what? Well… I don’t know. It’s just that this is entirely new territory, and it’s also not the topic of my post here. So let me just quote Wikipedia on this and then move on: “The fundamental limit for a photon’s energy is the Planck energy [that’s the 10¹⁹ GeV which I mentioned above: to be precise, that ‘limit energy’ is said to be 1.22×10¹⁹ GeV], for the reasons cited above [that ‘photon’ would not be a ‘photon’ but a black hole, sucking up everything around it]. This makes the Planck scale a fascinating realm for speculation by theoretical physicists from various schools of thought. Is the Planck scale domain a seething mass of virtual black holes? Is it a fabric of unimaginably fine loops or a spin foam network? Is it interpenetrated by innumerable Calabi-Yau manifolds which connect our 3-dimensional universe with a higher-dimensional space? [That’s what string theory is about.] Perhaps our 3-D universe is ‘sitting’ on a ‘brane’ which separates it from a 2, 5, or 10-dimensional universe and this accounts for the apparent ‘weakness’ of gravity in ours. These approaches, among several others, are being considered to gain insight into Planck scale dynamics. This would allow physicists to create a unified description of all the fundamental forces. [That’s what these Grand Unification Theories (GUTs) are about.]”
Hmm… I wish I could find some easy explanation of why higher energy means smaller size. I do note there’s an easy relationship between energy and momentum for massless particles traveling at the velocity of light (like photons): E = pc (or p = E/c), but – from what I wrote above – it is obvious that it’s the spread in momentum (and, therefore, in wave numbers) which determines how short or how long our wave train is, not the energy level as such. I guess I’ll just have to do some more research here and, hopefully, get back to you when I understand things better.
Re-visiting the Uncertainty Principle
You will probably have read countless accounts of the double-slit experiment, and so you will probably remember that these thought or actual experiments also try to watch the electrons as they pass the slits – with disastrous results: the interference pattern disappears. I copy Feynman’s own drawing from his 1965 Lecture on Quantum Behavior below: a light source is placed behind the ‘wall’, right between the two slits. Now, light (i.e. photons) gets scattered when it hits electrons and so now we should ‘see’ through which slit the electron is coming. Indeed, remember that we sent them through these slits one by one, and we still had interference – suggesting the ‘electron wave’ somehow goes through both slits at the same time, which can’t be true – because an electron is a particle.
[Illustration: watching the electrons]
However, let’s re-examine what happens exactly.
1. We can only detect all electrons if the light is high intensity, and high intensity does not mean higher-energy photons but more photons. Indeed, if the light source is dim, then electrons might get through without being seen. So a high-intensity light source allows us to see all electrons but – as demonstrated not only in thought experiments but also in the laboratory – it destroys the interference pattern.
2. What if we use lower-energy photons, like infrared light with wavelengths of 10 to 100 microns instead of visible light? We could then use thermal-imaging night vision goggles to ‘see’ the electrons. 🙂 And if that doesn’t work, we can use radio waves (or perhaps radar!). The problem – as Feynman explains it – is that such low-frequency light (associated with long wavelengths) only gives a ‘big fuzzy flash’ when it is scattered: “We can no longer tell which hole the electron went through! We just know it went somewhere!” At the same time, “the jolts given to the electron are now small enough so that we begin to see some interference effect again.” Indeed: “For wavelengths much longer than the separation between the two slits (when we have no chance at all of telling where the electron went), we find that the disturbance due to the light gets sufficiently small that we again get the interference curve P12.” [P12 is the curve describing the original interference effect.]
Now, that would suggest that, when push comes to shove, the Uncertainty Principle only describes some indeterminacy in the so-called Compton scattering of a photon by an electron. This Compton scattering is illustrated below: it’s a more or less elastic collision between a photon and an electron, in which momentum gets exchanged (especially the direction of the momentum) and – quite important – the wavelength of the scattered light is different from that of the incident radiation. Hence, the photon loses some energy to the electron and, because it will still travel at speed c, that means its wavelength must increase as prescribed by the λ = h/p relation (with p = E/c for a photon). The change in the wavelength is called the Compton shift, and its formula is given in the illustration: it depends on the (rest) mass of the electron, obviously, and on the change in the direction of the momentum (of the photon – but that change in direction will obviously also be related to the recoil direction of the electron).
[Illustration: Compton scattering – the Compton shift formula: Δλ = (h/(me·c))·(1 – cosθ)]
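The Compton shift formula is simple enough to play with numerically. A minimal Python version (the function name is mine):

```python
from math import cos, pi

h = 6.626e-34    # Planck constant (J*s)
m_e = 9.109e-31  # electron rest mass (kg)
c = 2.998e8      # speed of light (m/s)

def compton_shift(theta):
    """Increase in the photon's wavelength after scattering through angle theta."""
    return (h / (m_e * c)) * (1 - cos(theta))

print(h / (m_e * c))      # the Compton wavelength of the electron: ~2.43e-12 m
print(compton_shift(pi))  # maximum shift (backscattering): ~4.85e-12 m
```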
This is a very physical interpretation of the Uncertainty Principle, but it’s the one which the great Richard P. Feynman himself stuck to in 1965, i.e. when he wrote his famous Lectures on Physics at the height of his career. Let me quote his interpretation of the Uncertainty Principle in full indeed:
“It is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern. If an apparatus is capable of determining which hole the electron goes through, it cannot be so delicate that it does not disturb the pattern in an essential way. No one has ever found (or even thought of) a way around this. So we must assume that it describes a basic characteristic of nature.”
That’s very mechanistic indeed, and it points to indeterminacy rather than ontological uncertainty. However, there’s weirder stuff than electrons being ‘disturbed’ in some kind of random way by the photons we use to detect them, with the randomness only being related to us not knowing at what time photons leave our light source, and what energy or momentum they have exactly. That’s just ‘indeterminacy’ indeed; not some fundamental ‘uncertainty’ about Nature.
We see such ‘weirder stuff’ in those mega- and now tera-electronvolt experiments in particle accelerators. Feynman could refer to the high-energy electron collisions being studied at the 3 km long Stanford Linear Accelerator (SLAC) – which delivered its first beam in 1966 – but stuff like quarks and all that was discovered only in the late 1960s and early 1970s, so that’s after Feynman’s Lectures on Physics. So let me just mention a rather remarkable example of the Uncertainty Principle at work which Feynman quotes in his 1985 Alix G. Mautner Memorial Lectures on Quantum Electrodynamics.
In the Feynman diagram below, we see a photon disintegrating, at time t = T3, into a positron and an electron. The positron (a positron is an electron with positive charge, basically: it’s the electron’s anti-matter counterpart) meets another electron that ‘happens’ to be nearby, and the annihilation results in (another) high-energy photon being emitted. While, as Feynman underlines, “this is a sequence of events which has been observed in the laboratory”, how is all this possible? We create matter – an electron and a positron both have considerable mass – out of nothing here ! [Well… OK – there’s a photon, so that’s some energy to work with…]
[Feynman diagram: photon disintegration]

Feynman explains this weird observation without reference to the Uncertainty Principle. He just notes that “Every particle in Nature has an amplitude to move backwards in time, and therefore has an anti-particle.” And so that’s what this electron coming from the bottom-left corner does: it emits a photon and then the electron moves backwards in time. So, while we see a (very short-lived) positron moving forward, it’s actually an electron quickly traveling back in time according to Feynman! And, after a short while, it has had enough of going back in time, so then it absorbs a photon and continues in a slightly different direction. Hmm… If this does not sound fishy to you, it does to me.
The more standard explanation is in terms of the Uncertainty Principle applied to energy and time. Indeed, I mentioned that we have several pairs of conjugate variables in quantum mechanics: position and momentum are one such pair (related through the de Broglie relation p = ħk), but energy and time are another (related through the other de Broglie relation E = hf = ħω). While the ‘energy-time uncertainty principle’ – ΔE·Δt ≥ ħ/2 – resembles the position-momentum relationship above, it is apparently used only for ‘very short-lived products’ produced in high-energy collisions in accelerators. I must assume the short-lived positron in the Feynman diagram is such an example: there is some kind of borrowing of energy (remember mass is equivalent to energy) against time, and then normalcy soon gets restored. Now THAT is something else than indeterminacy, I’d say.
[Illustration: the energy-time uncertainty relation]
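Just to get a feel for the time scales involved, here’s a back-of-the-envelope Python estimate. I assume the ‘borrowed’ energy is (at least) the rest energy of the pair, i.e. 2 × 0.511 MeV, and read the relation as Δt ≈ ħ/(2ΔE):

```python
hbar = 1.055e-34  # J*s
eV = 1.602e-19    # one electronvolt in Joule

dE = 2 * 0.511e6 * eV  # rest energy of an electron-positron pair
dt = hbar / (2 * dE)   # rough lifetime allowed by ΔE·Δt >= ħ/2
print(dt)              # ~3e-22 s: gone almost before it was there
```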
But so Feynman would say both interpretations are equivalent, because Nature doesn’t care about our interpretations.
What to say in conclusion? I don’t know. I obviously have some more work to do before I’ll be able to claim to understand the uncertainty principle – or quantum mechanics in general – somewhat. I think the next step is to solve my problem with the summary ‘not enough arrows’ explanation, which is – evidently – linked to the relation between energy and size of particles. That’s the one loose end I really need to tie up I feel ! I’ll keep you posted !
Light and matter
In my previous post, I discussed the de Broglie wave of a photon. It’s usually referred to as ‘the’ wave function (or the psi function) but, as I explained, for every psi – i.e. the position-space wave function Ψ(x, t) – there is also a phi – i.e. the momentum-space wave function Φ(p, t).
In that post, I also compared it – without much formalism – to the de Broglie wave of ‘matter particles’. Indeed, in physics, we look at ‘stuff’ as being made of particles and, while the taxonomy of the particle zoo of the Standard Model of physics is rather complicated, one ‘taxonomic’ principle stands out: particles are either matter particles (known as fermions) or force carriers (known as bosons). It’s a strict separation: either/or. No split personalities.
A quick overview before we start…
Wikipedia’s overview of particles in the Standard Model (including the latest addition: the Higgs boson) illustrates this fundamental dichotomy in nature: we have the matter particles (quarks and leptons) on one side, and the bosons (i.e. the force carriers) on the other side.
Don’t be put off by my remark on the particle zoo: it’s a term coined in the 1960s, when the situation was quite confusing indeed (there were more than 400 ‘particles’). However, the picture is quite orderly now. In fact, the Standard Model put an end to the discovery of ‘new’ particles, and it’s been stable since the 1970s, as experiments confirmed the reality of quarks. Indeed, all resistance to Gell-Mann’s quarks and his flavor and color concepts (which are just words to describe new types of ‘charge’ – similar to electric charge but with more variety) ended when experiments at Stanford’s Linear Accelerator Laboratory (SLAC) in November 1974 confirmed the existence of the (second-generation and, hence, heavy and unstable) ‘charm’ quark (again, the name suggests some frivolity, but it’s serious physical research).
As for the Higgs boson, its existence had also been predicted – since 1964, to be precise – but it took fifty years to confirm it experimentally, because only something like the Large Hadron Collider could produce the required energy to find it in these particle-smashing experiments – a rather crude way of analyzing matter, you may think, but so be it. [In case you harbor doubts about the Higgs particle, please note that, while CERN is the first to admit further confirmation is needed, the Nobel Prize Committee apparently found the evidence ‘evidence enough’ to finally award Higgs and others a Nobel Prize for their ‘discovery’ fifty years ago – and, as you know, the Nobel Prize committee members are usually rather conservative in their judgment. So you would have to come up with a rather complex conspiracy theory to deny its existence.]
Also note that the particle zoo is actually less complicated than it looks at first sight: the (composite) particles that are stable in our world – this world – consist of three quarks only: a proton consists of two up quarks and one down quark and, hence, is written as uud, and a neutron is two down quarks and one up quark: udd. Hence, for all practical purposes (i.e. for our discussion of how light interacts with matter), only the so-called first generation of matter particles – so that’s the first column in the overview above – is relevant.
All the particles in the second and third column are unstable. That being said, they survive long enough – a muon disintegrates after 2.2 millionths of a second (on average) – to deserve the ‘particle’ title, as opposed to a ‘resonance’, whose lifetime can be as short as a billionth of a trillionth of a second. But we’ve gone through these numbers before, so I won’t repeat that here. Why do we need them? Well… We don’t, but they are a by-product of our world view (i.e. the Standard Model) and, for some reason, we find everything that this Standard Model says should exist, even if most of the stuff (all second- and third-generation matter particles, and all these resonances) vanishes rather quickly – but that also seems to be consistent with the model. [As for a possible fourth (or higher) generation, Feynman didn’t exclude it when he wrote his 1985 Lectures on quantum electrodynamics but, checking on Wikipedia, I find the following: “According to the results of the statistical analysis by researchers from CERN and the Humboldt University of Berlin, the existence of further fermions can be excluded with a probability of 99.99999% (5.3 sigma).” If you want to know why… Well… Read the rest of the Wikipedia article. It’s got to do with the Higgs particle.]
As for the (first-generation) neutrino in the table – the only one which you may not be familiar with – these are very spooky things but – I don’t want to scare you – relatively high-energy neutrinos are going through your and my body, right now and here, at a rate of some hundred trillion per second. They are produced by stars (stars are huge nuclear fusion reactors, remember?), and also as a by-product of these high-energy collisions in particle accelerators of course. But they are very hard to detect: the first trace of their existence was found in 1956 only – 26 years after their existence had been postulated. The fact that Wolfgang Pauli proposed their existence in 1930 to explain how beta decay could conserve energy, momentum and spin (angular momentum) demonstrates not only the genius but also the confidence of these early theoretical quantum physicists. Most neutrinos passing through Earth are produced by our Sun, and they are now being analyzed more routinely. The largest neutrino detector on Earth is called IceCube. It sits at the South Pole – or under it, rather, as it’s suspended under the Antarctic ice – and it regularly captures high-energy neutrinos in the range of 1 to 10 TeV.
Let me – to conclude this introduction – just quickly list and explain the bosons (i.e. the force carriers) in the table above:
1. Of all of the bosons, the photon (i.e. the topic of this post) is the most straightforward: there is only one type of photon, even if it comes in different possible states of polarization.
I should probably do a quick note on polarization here – even if all of the stuff that follows will make abstraction of it. Indeed, the discussion on photons that follows (largely adapted from Feynman’s 1985 Lectures on Quantum Electrodynamics) assumes that there is no such thing as polarization – because it would make everything even more complicated. The concept of polarization (linear, circular or elliptical) has a direct physical interpretation in classical mechanics (i.e. light as an electromagnetic wave). In quantum mechanics, however, polarization becomes a so-called qubit (quantum bit): leaving aside so-called virtual photons (these are short-range disturbances going between a proton and an electron in an atom – effectively mediating the electromagnetic force between them), the property of polarization comes in two basis states (0 and 1, or left and right), but these two basis states can be superposed. In ket notation: if ¦0〉 and ¦1〉 are the basis states, then any linear combination α·¦0〉 + β·¦1〉 is also a valid state provided |α|² + |β|² = 1, in line with the need to get probabilities that add up to one.
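That normalization condition is easy to illustrate with a (made-up) example state in Python:

```python
from math import sqrt

# A hypothetical polarization qubit: alpha*|0> + beta*|1>
alpha = (1 + 1j) / 2  # |alpha|^2 = 0.5
beta = 1 / sqrt(2)    # |beta|^2  = 0.5

# For a valid state, the squared moduli must add up to one:
print(abs(alpha)**2 + abs(beta)**2)  # ~1.0
```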
In case you wonder why I am introducing these kets, there is no reason for it, except that I will be introducing some other tools in this post – such as Feynman diagrams – and so that’s all. In order to wrap this up, I need to note that kets are used in conjunction with bras. So we have a bra-ket notation: the ket gives the starting condition, and the bra – denoted as 〈 ¦ – gives the final condition. They are combined in statements such as 〈 particle arrives at x¦particle leaves from s〉 or – in short – 〈 x¦s〉 and, while x and s would have some real-number value, 〈 x¦s〉 would denote the (complex-valued) probability amplitude associated with the event consisting of these two conditions (i.e. the starting and the final condition).
But don’t worry about it. This digression is just what it is: a digression. Oh… Just make a mental note that the so-called virtual photons (the mediators that are supposed to keep the electron in touch with the proton) have four possible states of polarization – instead of two. They are related to the four directions of space (x, y and z) and time (t). 🙂
2. Gluons, the exchange particles for the strong force, are more complicated: they come in eight so-called colors. In practice, one should think of these colors as different charges, but so we have more elementary charges in this case than just plus or minus one (±1) – as we have for the electric charge. So it’s just another type of qubit in quantum mechanics.
[Note that the so-called elementary ±1 values for electric charge are not really elementary: it’s –1/3 (for the down quark, and for the second- and third-generation strange and bottom quarks as well) and +2/3 (for the up quark as well as for the second- and third-generation charm and top quarks). That being said, electric charge takes two values only, and the ±1 value is easily found from a linear combination of the –1/3 and +2/3 values.]
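Here’s that little bit of arithmetic as a tiny Python check (the dictionary just encodes the quark charges listed above) – e.g. the proton (uud) adds up to +1 and the neutron (udd) to 0:

```python
from fractions import Fraction

charge = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

proton, neutron = ('u', 'u', 'd'), ('u', 'd', 'd')

print(sum(charge[q] for q in proton))   # 1: the +1 elementary charge
print(sum(charge[q] for q in neutron))  # 0: neutral indeed
```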
3. Z and W bosons carry the so-called weak force, aka Fermi’s interaction: they explain how one type of quark can change into another, thereby explaining phenomena such as beta decay. Beta decay explains why carbon-14 will, after a very long time (as compared to the ‘unstable’ particles mentioned above), spontaneously decay into nitrogen-14. Indeed, carbon-12 is the (very) stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years ‘only’ (so one can’t really call carbon-14 ‘unstable’: perhaps ‘less stable’ will do) and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). As for the name, a beta particle can refer to an electron or a positron, so we can have β– decay (e.g. the above-mentioned carbon-14 decay) as well as β+ decay (e.g. magnesium-23 into sodium-23). There’s also alpha and gamma decay, but that involves different things.
As you can see from the table, W± and Z bosons are very heavy (some 157,000 and 178,000 times heavier than an electron!), and the W± bosons carry (positive or negative) electric charge. So why don’t we see them? Well… They are so short-lived that we only see them as resonances in experiments: a broad decay width, just a tiny little trace. That’s also the reason why we see little or nothing of the weak force in real life: the force-carrying particles mediating this force don’t get anywhere.
4. Finally, as mentioned above, the Higgs particle – and, hence, the associated Higgs field – had been predicted since 1964 already, but its existence was only (tentatively) experimentally confirmed last year. The Higgs field gives fermions, and also the W and Z bosons, mass (but not photons and gluons), and – as mentioned above – that’s why the weak force has such short range as compared to the electromagnetic and strong forces. Note, however, that the Higgs particle does not actually explain the gravitational force, so it’s not the (theoretical) graviton, and there is no quantum field theory for the gravitational force as yet. Just Google it and you’ll quickly find out why: there are theoretical as well as practical (experimental) reasons for that.
The Higgs field stands out from the other force fields because it’s a scalar field (as opposed to a vector field). However, I have no idea how this so-called Higgs mechanism – i.e. the interaction with matter particles (with the quarks and leptons, but not directly with neutrinos, it would seem from the diagram below), with the W and Z bosons, and with itself, but not with the massless photons and gluons – actually works. But then I still have a very long way to go on this Road to Reality.
In any case… The topic of this post is to discuss light and its interaction with matter – not the weak or strong force, nor the Higgs field.
Let’s go for it.
Amplitudes, probabilities and observable properties
Being born a boson or a fermion makes a big difference. That being said, both fermions and bosons are wavicles described by a complex-valued psi function, colloquially known as the wave function. To be precise, there will be several wave functions, and the square of their modulus (sorry for the jargon) will give you the probability of some observable property having a value in some relevant range, usually denoted by Δ. [I also explained (in my post on Bose and Fermi) how the rules for combining amplitudes differ for bosons versus fermions, and how that explains why they are what they are: matter particles occupy space, while photons not only can but also like to crowd together in, for example, a powerful laser beam. I’ll come back to that.]
For all practical purposes, relevant usually means ‘small enough to be meaningful’. For example, we may want to calculate the probability of detecting an electron in some tiny spacetime interval (Δx, Δt). [Again, ‘tiny’ in this context means small enough to be relevant: if we are looking at a hydrogen atom (whose size is a few nanometers), then Δx is likely to be a cube or a sphere with an edge or a radius of a few picometers only (a picometer is a thousandth of a nanometer, so it’s a millionth of a millionth of a meter); and, noting that the electron’s speed is approximately 2200 km per second… Well… I will let you calculate a relevant Δt. :-)]
If we want to do that, then we will need to square the modulus of the corresponding wave function Ψ(x, t). To be precise, we will have to do a summation of all the values │Ψ(x, t)│² over the interval and, because x and t are real (and, hence, continuous) numbers, that means doing some integral (because an integral is the continuous version of a sum).
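If you want to see that ‘sum of │Ψ│² values over an interval’ in action, here’s a toy numerical version with a made-up (normalized) Gaussian wave function – arbitrary units, so only the procedure matters, not the numbers:

```python
import numpy as np

# A made-up, normalized Gaussian wave function at some fixed time t.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = (np.pi ** -0.25) * np.exp(-x**2 / 2) * np.exp(1j * 5.0 * x)

# Probability of finding the particle in the interval [0.0, 0.5]:
# sum |psi|^2 * dx over the interval (a crude stand-in for the integral).
mask = (x >= 0.0) & (x <= 0.5)
p_interval = np.sum(np.abs(psi[mask]) ** 2) * dx
p_total = np.sum(np.abs(psi) ** 2) * dx
print(f"P(0 <= x <= 0.5) ≈ {p_interval:.4f} (total over all x ≈ {p_total:.4f})")
```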
But that’s only one example of an observable property: position. There are others. For example, we may not be interested in the particle’s exact position but only in its momentum or energy. Well, we have another wave function for that: the momentum wave function Φ(p, t). In fact, if you looked at my previous posts, you’ll remember the two are related because position and momentum are conjugate variables: the two wave functions are Fourier transform duals of one another. A less formal way of expressing that is to refer to the uncertainty principle. But this is not the time to repeat things.
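For what it’s worth, the Fourier-duality claim is easy to check numerically: make the position-space wave packet narrower, and its transform gets wider. This little sketch (my own toy example, using a discrete FFT as a stand-in for the actual Fourier transform) shows the trade-off:

```python
import numpy as np

# Two made-up Gaussian wave packets in position space; the momentum-space
# wave function is obtained as (a discrete stand-in for) the Fourier transform.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))

for sigma in (0.5, 2.0):
    psi_x = np.exp(-x**2 / (2 * sigma**2))
    psi_k = np.fft.fftshift(np.fft.fft(psi_x))
    # Spreads measured as standard deviations of |psi|^2 in x and in k:
    wx, wk = np.abs(psi_x) ** 2, np.abs(psi_k) ** 2
    sigma_x = np.sqrt(np.sum(x**2 * wx) / np.sum(wx))
    sigma_k = np.sqrt(np.sum(k**2 * wk) / np.sum(wk))
    print(f"sigma_x ≈ {sigma_x:.3f}, sigma_k ≈ {sigma_k:.3f}, "
          f"product ≈ {sigma_x * sigma_k:.3f}")
```

The product of the two spreads comes out the same (about 0.5, i.e. ħ/2 in natural units) for both packets – which is the uncertainty principle in a nutshell.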
The bottom line is that all particles travel through spacetime with a backpack full of complex-valued wave functions. We don’t know who and where these particles are exactly, and so we can’t talk to them – but we can e-mail God and He’ll send us the wave function that we need to calculate some probability we are interested in because we want to check – in all kinds of experiments designed to fool them – if it matches with reality.
As mentioned above, I highlighted the main difference between bosons and fermions in my Bose and Fermi post, so I won’t repeat that here. Just note that, when it comes to working with those probability amplitudes (that’s just another word for these psi and phi functions), it makes a huge difference: fermions and bosons interact very differently. Bosons are party particles: they like to crowd and will always welcome an extra one. Fermions, on the other hand, will exclude each other: that’s why there’s something referred to as the Pauli exclusion principle in quantum mechanics. That’s why fermions make matter (matter needs space) and bosons are force carriers (they’ll just call friends to help when the load gets heavier).
Light versus matter: Quantum Electrodynamics
OK. Let’s get down to business. This post is about light, or about light-matter interaction. Indeed, in my previous post (on Light), I promised to say something about the amplitude of a photon to go from point A to B (because – as I wrote in my previous post – that’s more ‘relevant’, when it comes to explaining stuff, than the amplitude of a photon to actually be at point x at time t), and so that’s what I will do now.
In his 1985 Lectures on Quantum Electrodynamics (which are lectures for the lay audience), Feynman writes the amplitude of a photon to go from point A to B as P(A to B) – and the P stands for photon obviously, not for probability. [I am tired of repeating that you need to square the modulus of an amplitude to get a probability but – here you are – I have said it once more.] That’s in line with the other fundamental wave function in quantum electrodynamics (QED): the amplitude of an electron to go from A to B, which is written as E(A to B). [You got it: E just stands for electron, not for our electric field vector.]
I also talked about the third fundamental amplitude in my previous post: the amplitude of an electron to absorb or emit a photon. So let’s have a look at these three. As Feynman says: “Out of these three amplitudes, we can make the whole world, aside from what goes on in nuclei, and gravitation, as always!”
Well… Thank you, Mr Feynman: I’ve always wanted to understand the World (especially if you made it).
The photon-electron coupling constant j
Let’s start with the last of those three amplitudes (or wave functions): the amplitude of an electron to absorb or emit a photon. Indeed, absorbing or emitting makes no difference: we have the same complex number for both. It’s a constant – denoted by j (for junction number) – equal to –0.1 (a bit less actually but it’s good enough as an approximation in the context of this blog).
Huh? Minus 0.1? That’s not a complex number, is it? It is. Real numbers are complex numbers too: –0.1 is 0.1·e^(iπ) in polar form. As Feynman puts it: it’s “a shrink to about one-tenth, and half a turn.” The ‘shrink’ is the 0.1 magnitude of this vector (or arrow), and the ‘half-turn’ is the angle of π (i.e. 180 degrees). He obviously refers to multiplying (not adding) j with other amplitudes, e.g. P(A to C) and E(B to C) if the coupling is to happen at or near C. And, as you’ll remember, multiplying complex numbers amounts to adding their phases and multiplying their moduli (so that’s adding the angles and multiplying the lengths).
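If you want to see the ‘shrink and turn’ arithmetic in action, here’s a two-line check (the 0.8 and 0.3 are made-up values for ‘some other amplitude’, nothing more):

```python
import cmath

j = -0.1  # the coupling: a 'shrink to one-tenth and half a turn'
print(cmath.polar(j))          # (0.1, pi): modulus 0.1, phase pi (half a turn)

# Multiplying amplitudes: moduli multiply, phases add.
a = cmath.rect(0.8, 0.3)       # made-up amplitude: modulus 0.8, phase 0.3 rad
b = j * a
print(abs(b), cmath.phase(b))  # modulus 0.08; phase 0.3 + pi, wrapped into (-pi, pi]
```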
Let’s introduce a Feynman diagram at this point – drawn by Feynman himself – which shows three possible ways of two electrons exchanging a photon. We actually have two couplings here, and so the combined amplitude will involve two j‘s. In fact, if we label the starting points of the two lines representing our electrons as 1 and 2 respectively, their end points as 3 and 4, and the points where the photon is emitted and absorbed as 5 and 6, then the amplitude for the first of these events will be given by:
E(1 to 5)·j·E(5 to 3)·E(2 to 6)·j·E(6 to 4)·P(5 to 6)
As for how that j factor works, please do read the caption of the illustration below: the same j describes both emission as well as absorption. It’s just that we have both an emission as well as an absorption here, so we have a j² factor, which is less than 0.1·0.1 = 0.01. At this point, it’s worth noting that the amplitudes we’re talking about here – i.e. for one possible way of an exchange like the one below happening – are obviously very tiny. They only become significant when we add many of these amplitudes, which – as explained below – is what has to happen: one has to consider all possible paths, calculate the amplitudes for them (through multiplication), and then add all these amplitudes, to then – finally – square the modulus of the combined ‘arrow’ (or amplitude) to get some probability of something actually happening. [Again, that’s the best we can do: calculate probabilities that correspond to experimentally measured occurrences. We cannot predict anything in the classical sense of the word.]
Feynman diagram of photon-electron coupling
A Feynman diagram is not just some sketchy drawing. For example, we have to care about scales: the distance and time units are equivalent (so distance would be measured in light-seconds or, else, time would be measured in units equivalent to the time needed for light to travel one meter). Hence, particles traveling through time (and space) – from the bottom of the graph to the top – will usually not be traveling at an angle of more than 45 degrees (as measured from the time axis) but, from the graph above, it is clear that photons do. [Note that electrons moving through spacetime are represented by plain straight lines, while photons are represented by wavy lines. It’s just a matter of convention.]
More importantly, a Feynman diagram is a pictorial device showing what needs to be calculated and how. Indeed, with all the complexities involved, it is easy to lose track of what should be added and what should be multiplied, especially when it comes to much more complicated situations like the one described above (e.g. making sense of a scattering event). So, while the coupling constant j (aka the ‘charge’ of a particle – but it’s obviously not the electric charge) is just a number, calculating an actual E(A to B) amplitude is not easy – not only because there are many different possible routes (paths) but because (almost) anything can happen. Let’s have a closer look at it.
E(A to B)
As Feynman explains in his 1985 QED Lectures: “E(A to B) can be represented as a giant sum of a lot of different ways an electron can go from point A to B in spacetime: the electron can take a ‘one-hop flight’, going directly from point A to B; it could take a ‘two-hop flight’, stopping at an intermediate point C; it could take a ‘three-hop flight’ stopping at points D and E, and so on.”
Fortunately, the calculation re-uses known values: the amplitude for each ‘hop’ – from C to D, for example – is P(C to D) – so that’s the same propagator as the amplitude of a photon (!) to go from one point to another, even if we are talking about an electron here. But there’s a difference: we also have to multiply the amplitudes for each ‘hop’ with the amplitude for each ‘stop’, and that’s represented by another number – not j but n². So we have an infinite series of terms for E(A to B): P(A to B) + P(A to C)·n²·P(C to B) + P(A to D)·n²·P(D to E)·n²·P(E to B) + … for all possible intermediate points C, D, E, and so on, as per the illustration below.
E(A to B)
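Just to show the bookkeeping – multiply along a path, add over paths – here’s a toy version of that series. Everything in it is made up: the ‘points’, the fake one-hop amplitudes, and the stop factor n² (so do not read any physics into the numbers):

```python
import cmath

# Toy version of the hop-and-stop series for E(A to B).
points = {"A": 0.0, "B": 4.0, "C": 1.5, "D": 2.5}
n2 = 0.2 + 0.0j  # placeholder for n^2 -- NOT the physical value

def P(x, y):
    """Fake one-hop amplitude: shrinks and turns with 'distance'."""
    d = abs(points[y] - points[x])
    return cmath.rect(1 / (1 + d), d)  # made-up modulus and phase

# One-hop term, two two-hop terms (via C or D), and one three-hop term:
E = (P("A", "B")
     + P("A", "C") * n2 * P("C", "B")
     + P("A", "D") * n2 * P("D", "B")
     + P("A", "C") * n2 * P("C", "D") * n2 * P("D", "B"))
print(abs(E) ** 2)  # square the modulus of the summed 'arrow' to get a probability
```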
You’ll immediately ask: what’s the value of n? It’s quite important to know it, because we want to know how big these n², n⁴, etcetera terms are. I’ll be honest: I have not come to terms with that yet. According to Feynman (QED, p. 125), it is the ‘rest mass’ of an ‘ideal’ electron: an ‘ideal’ electron is an electron that doesn’t know Feynman’s amplitude theory and just goes from point to point in spacetime using only the direct path. 🙂 Hence, it’s not a probability amplitude like j: a proper probability amplitude will always have a modulus less than 1, and so when we see exponential terms like j², j⁴,… we know we should not be all that worried – because these sorts of terms vanish (go to zero) for sufficiently large exponents. For E(A to B), we do not have such vanishing terms. I will not dwell on this right here, but I promise to discuss it in the Post Scriptum of this post. The frightening possibility is that n might be a number larger than one.
[As we’re freewheeling a bit anyway here, just a quick note on conventions: I should not be writing j in bold-face, because it’s a (complex- or real-valued) number, and symbols representing numbers are usually not written in bold-face: vectors are written in bold-face. So, while you can look at a complex number as a vector, well… It’s just one of these inconsistencies, I guess. The problem with using bold-face letters to represent complex numbers (like amplitudes) is that they suggest that the ‘dot’ in a product (e.g. j·j) is an actual dot product (aka a scalar product or an inner product) of two vectors. That’s not the case. We’re multiplying complex numbers here, and so we’re just using the standard definition of a product of complex numbers. This subtlety probably explains why Feynman prefers to write the above sum as P(A to B) + P(A to C)*n²*P(C to B) + P(A to D)*n²*P(D to E)*n²*P(E to B) + … But then I find that using that asterisk to represent multiplication is a bit funny (although it’s a pretty common thing in complex math) and so I am not using it. Just be aware that a dot in a product may not always mean the same type of multiplication: multiplying complex numbers and multiplying vectors is not the same. […] And I won’t write j in bold-face anymore.]
P(A to B)
Regardless of the value for n, it’s obvious we need a functional form for P(A to B), because that’s the other thing (other than n) that we need to calculate E(A to B). So what’s the amplitude of a photon to go from point A to B?
Well… The function describing P(A to B) is obviously some wave function – so that’s a complex-valued function of x and t. It’s referred to as a (Feynman) propagator: a propagator function gives the probability amplitude for a particle to travel from one place to another in a given time, or to travel with a certain energy and momentum. [So our function for E(A to B) will be a propagator as well.] You can check out the details on Wikipedia. Indeed, I could insert the formula here, but believe me if I say it would only confuse you. The points to note are these:
1. The propagator is also derived from the wave equation describing the system, so that’s some kind of differential equation which incorporates the relevant rules and constraints that apply to the system. For electrons, that’s the Schrödinger equation I presented in my previous post. For photons… Well… As I mentioned in my previous post, there is ‘something similar’ for photons – there must be – but I have not seen anything that’s equally ‘simple’ as the Schrödinger equation for photons. [I have Googled a bit but it’s obvious we’re talking pretty advanced quantum mechanics here – so it’s not the QM-101 course that I am currently trying to make sense of.]
2. The most important thing (in this context at least) is that the key variable in this propagator (i.e. the Feynman propagator for the photon) is I: that spacetime interval which I mentioned in my previous post already:
I = Δr² – Δt² = (z₂ – z₁)² + (y₂ – y₁)² + (x₂ – x₁)² – (t₂ – t₁)²
In this equation, we need to measure the time and spatial distance between two points in spacetime in equivalent units (these ‘points’ are usually referred to as events, and their coordinates as four-vectors), so we’d use light-seconds for the unit of distance or, for the unit of time, the time it takes for light to travel one meter. [If we don’t want to convert the time or distance scales, then we have to write I as I = Δr² – c²Δt².] Now, there are three types of intervals:
1. For time-like intervals, we have a negative value for I, so Δt² > Δr². For two events separated by a time-like interval, enough time passes between them so there could be a cause–effect relationship between the two events. In a Feynman diagram, the angle between the time axis and the line between the two events will be less than 45 degrees from the vertical axis. The traveling electrons in the Feynman diagrams above are an example.
2. For space-like intervals, we have a positive value for I, so Δt² < Δr². Events separated by space-like intervals cannot possibly be causally connected. The photons traveling between points 5 and 6 in the first Feynman diagram are an example – but then photons do have amplitudes to travel faster than light.
3. Finally, for light-like intervals, I = 0, or Δt² = Δr². The points connected by the 45-degree lines in the illustration below (which Feynman uses to introduce his Feynman diagrams) are an example of points connected by light-like intervals.
[Note that we are using the so-called space-like convention (+++–) here for I. There’s also a time-like convention, i.e. with +––– as signs: I = Δt² – Δr². So just check which convention is being used when you consult other sources on this (which I recommend), in case you feel I am not getting the signs right.]
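Here’s a small helper that computes I in the space-like convention (with c = 1, i.e. time measured in equivalent distance units) and classifies the interval – just a restatement of the three cases above in code:

```python
def interval(dt, dx, dy, dz):
    """Spacetime interval I = Δr² – Δt² in the space-like (+++–) convention,
    with time in units where c = 1 (e.g. meters of light-travel time)."""
    I = dx**2 + dy**2 + dz**2 - dt**2
    if I < 0:
        kind = "time-like (a cause-effect link is possible)"
    elif I > 0:
        kind = "space-like (no causal connection possible)"
    else:
        kind = "light-like (on the light cone)"
    return I, kind

print(interval(2.0, 1.0, 0.0, 0.0))  # time-like
print(interval(1.0, 2.0, 0.0, 0.0))  # space-like
print(interval(1.0, 1.0, 0.0, 0.0))  # light-like
```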
Spacetime intervals

Now, what’s the relevance of this? To calculate P(A to B), we have to add the amplitudes for all possible paths that the photon can take – not just in space, but in spacetime. So we should add all these vectors (or ‘arrows’ as Feynman calls them) – an infinite number of them really. In the meanwhile, you know it amounts to adding complex numbers, and that infinite sums are done by doing integrals, but let’s take a step back: how are vectors added?
Well… That’s easy, you’ll say… It’s the parallelogram rule… Well… Yes. And no. Let me take a step back here to show how adding a whole range of similar amplitudes works.
The illustration below shows a bunch of photons – real or imagined – from a source above a water surface (the sun for example), all taking different paths to arrive at a detector under the water (let’s say some fish looking at the sky from under the water). In this case, we make abstraction of all the photons leaving at different times and so we only look at a bunch that’s leaving at the same point in time. In other words, their stopwatches will be synchronized (i.e. there is no phase shift term in the phase of their wave function) – let’s say at 12 o’clock when they leave the source. [If you think this simplification is not acceptable, well… Think again.]
When these stopwatches hit the retina of our poor fish’s eye (I feel we should put a detector there, instead of a fish), they will stop, and the hand of each stopwatch represents an amplitude: it has a modulus (its length) – which is assumed to be the same because all paths are equally likely (this is one of the first principles of QED) – but their direction is very different. However, by now we are quite familiar with these operations: we add all the ‘arrows’ indeed (or vectors or amplitudes or complex numbers or whatever you want to call them) and get one big final arrow, shown at the bottom – just above the caption. Look at it very carefully.
adding arrows
If you look at the so-called contribution made by each of the individual arrows, you can see that it’s the arrows associated with the path of least time and the paths immediately left and right of it that make the biggest contribution to the final arrow. Why? Because these stopwatches arrive around the same time and, hence, their hands point more or less in the same direction. It doesn’t matter what direction – as long as it’s more or less the same.
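You can actually reproduce that effect with a few lines of code. The geometry below (a source above a ‘water line’, a detector below it, and a slower speed under water) is entirely made up, but the punchline is generic: the unit arrows whose paths lie near the path of least time point in roughly the same direction and dominate the sum:

```python
import cmath
import math

# Toy 'adding arrows' exercise: photons leave S, cross the surface at
# different points (x, 0), and arrive at D. Each path contributes a unit
# arrow whose angle is omega * travel_time; light is slower under water.
S, D = (-5.0, 3.0), (5.0, -2.0)     # source above, detector below (made-up)
v_air, v_water, omega = 1.0, 0.75, 40.0

def travel_time(x):
    above = math.hypot(x - S[0], S[1]) / v_air
    below = math.hypot(D[0] - x, D[1]) / v_water
    return above + below

xs = [i * 0.1 for i in range(-80, 81)]
arrows = [cmath.exp(1j * omega * travel_time(x)) for x in xs]
total = sum(arrows)
print(f"|final arrow| ≈ {abs(total):.1f} out of {len(xs)} unit arrows")
print(f"path of least time crosses the surface near x = {min(xs, key=travel_time):.1f}")
```

Most of the 161 unit arrows cancel each other out; the modest final arrow you’re left with is built almost entirely from the paths around the least-time crossing point.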
[As for the calculation of the path of least time, that has to do with the fact that light is slowed down in water. Feynman shows why in his 1985 Lectures on QED, but I cannot possibly copy the whole book here! The principle is illustrated below.]

Least time principle
So, where are we? These digressions go on and on, don’t they? Let’s go back to the main story: we want to calculate P(A to B), remember?
As mentioned above, one of the first principles in QED is that all paths – in spacetime – are equally likely. So we need to add amplitudes for every possible path in spacetime using that Feynman propagator function. You can imagine that will be some kind of integral which you’ll never want to solve. Fortunately, Feynman’s disciples have done that for you already. The grand result is quite predictable: light has a tendency to travel in straight lines and at the speed of light.
WHAT!? Did Feynman get a Nobel prize for trivial stuff like that?
Yes. The math involved in adding amplitudes over all possible paths, not only in space but also in time, uses the so-called path integral formulation of quantum mechanics, and that’s got Feynman’s signature on it: it’s the main reason why he got the award – together with Julian Schwinger and Sin-Itiro Tomonaga, both much less well known than Feynman, but they shared the burden. Don’t complain about it. Just take a look at the ‘mechanics’ of it.
We already mentioned that the propagator has the spacetime interval I in its denominator. Now, the way it works is that, for values of I equal or close to zero – so for the paths that are associated with light-like intervals – our propagator function will yield large contributions in the ‘same’ direction (wherever that direction is), but for the spacetime intervals that are very much time- or space-like, the magnitude of our amplitude will be smaller and – worse – our arrow will point in the ‘wrong’ direction. In short, the arrows associated with the time- and space-like intervals don’t add up to much, especially over longer distances. [When distances are short, there are (relatively) few arrows to add, and so the probability distribution will be flatter: in short, the likelihood of the actual photon traveling faster or slower than the speed of light is higher.]
Contribution interval
Does this make sense? I am not sure, but I did what I promised to do. I told you how P(A to B) gets calculated; and from the formula for E(A to B), it is obvious that we can then also calculate E(A to B) provided we have a value for n. However, that value n is determined experimentally, just like the value of j, in order to ensure this amplitude theory yields probabilities that match the probabilities we observe in all kinds of crazy experiments that try to prove or disprove the theory; and then we can use these three amplitude formulas “to make the whole world”, as Feynman calls it, except the stuff that goes on inside of nuclei (because that’s the domain of the weak and strong nuclear force) and gravitation, for which we have a law (Newton’s Law) but no real ‘explanation’. [Now, you may wonder if this QED explanation of light is really all that good, but Mr Feynman thinks it is, and so I have no reason to doubt that – especially because there’s surely not anything more convincing lying around as far as I know.]
So what remains to be told? Lots of things, even within the realm of expertise of quantum electrodynamics. Indeed, Feynman applies the basics as described above to a number of real-life phenomena – quite interesting, all of it! – but, once again, it’s not my goal to copy all of his Lectures here. [I am only hoping to offer some good summaries of key points in some attempt to convince myself that I am getting some of it at least.] And then there is the strong force, and the weak force, and the Higgs field, and so on and so on. But that’s all very strange and new territory which I haven’t even started to explore. I’ll keep you posted as I am making my way towards it.
Post scriptum: On the values of j and n
In this post, I promised I would write something about how we can find j and n, but I realize that would just amount to copying three or four pages out of that book I mentioned above, and which inspired most of this post. Let me just say something more about that remarkable book, and then quote a few lines on what the author of that book – the great Mr Feynman! – thinks of the math behind calculating these two constants (the coupling constant j, and the ‘rest mass’ n of an ‘ideal’ electron). Now, before I do that, I should repeat that he actually invented that math (it makes use of a mathematical approximation method called perturbation theory) and that he got a Nobel Prize for it.
First, about the book. Feynman’s 1985 Lectures on Quantum Electrodynamics are not like his 1965 Lectures on Physics. The Lectures on Physics are proper courses for undergraduate and even graduate students in physics. This little 1985 book on QED is just a series of four lectures for a lay audience, conceived in honor of Alix G. Mautner. She was a friend of Mr Feynman’s who died a few years before he gave and wrote these ‘lectures’ on QED. She had a degree in English literature and would ask Mr Feynman regularly to explain quantum mechanics and quantum electrodynamics in a way she would understand. While they had known each other for about 22 years, he had apparently never taken enough time to do so, as he writes in his Introduction to these Alix G. Mautner Memorial Lectures: “So here are the lectures I really [should have] prepared for Alix, but unfortunately I can’t tell them to her directly, now.”
The great Richard Phillips Feynman himself died only three years later, in February 1988 – not of one but two rare forms of cancer. He was only 69 years old when he died. I don’t know if he was aware of the cancer(s) that would kill him, but I find his fourth and last lecture in the book, Loose Ends, just fascinating. Here we have a brilliant mind deprecating the math that earned him a Nobel Prize and without which the Standard Model would be unintelligible. I won’t try to paraphrase him. Let me just quote him. [If you want to check the quotes, the relevant pages are pages 125 to 131.]
[The math behind calculating these constants] is a “dippy process” and “having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent”. He adds: “It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization [“the shell game that we play to find n and j”, as he calls it] is not mathematically legitimate.” […] Now, Mr Feynman writes this about quantum electrodynamics, not about “the rest of physics” (so that’s quantum chromodynamics (QCD) – the theory of the strong interactions – and quantum flavordynamics (QFD) – the theory of weak interactions) which, he adds, “has not been checked anywhere near as well as electrodynamics.”
That’s a pretty damning statement, isn’t it? In one of my other posts (see: The End of the Road to Reality?), I explore these comments a bit. However, I have to admit I feel I really need to get back to math in order to appreciate these remarks. I’ve written way too much about physics anyway now (as opposed to my first dozen posts – which were much more math-oriented). So I’ll just have a look at some more stuff indeed (such as perturbation theory), and then I’ll get back to blogging. Indeed, I’ve written like 20 posts or so in a few months only – so I guess I should shut up for a while now!
In the meanwhile, you’re more than welcome to comment, of course!
I started the two previous posts attempting to justify why we need all these mathematical formulas to understand stuff: because otherwise we just keep on repeating very simplistic but nonsensical things such as ‘matter behaves (sometimes) like light’, ‘light behaves (sometimes) like matter’ or, combining both, ‘light and matter behave like wavicles’. Indeed: what does ‘like‘ mean? Like the same but different? 🙂 However, I have not said much about light so far.
Light and matter are two very different things. For matter, we have quantum mechanics. For light, we have quantum electrodynamics (QED). However, QED is not only a quantum theory about light: as Feynman pointed out in his little but exquisite 1985 book on quantum electrodynamics (QED: The Strange Theory of Light and Matter), it is first and foremost a theory about how light interacts with matter. However, let’s limit ourselves here to light.
In classical physics, light is an electromagnetic wave: it just travels on and on and on because of that wonderful interaction between electric and magnetic fields. A changing electric field induces a magnetic field, the changing magnetic field then induces an electric field, and then the changing electric field induces a magnetic field, and… Well, you get the idea: it goes on and on and on. This wonderful machinery is summarized in Maxwell’s equations – and most beautifully so in the so-called Heaviside form of these equations, which assume a charge-free vacuum (so there are no other charges lying around exerting a force on the electromagnetic wave or on the (charged) particle whose behavior we want to study) and also make abstraction of other complications such as electric currents (so there are no moving charges going around either).
I reproduced Heaviside’s Maxwell equations below, as well as an animated gif which is supposed to illustrate the dynamics explained above. [In case you wonder who Heaviside was: check it out – he was quite a character.] The animation is not all that great but OK enough. And don’t worry if you don’t understand the equations – just note the following:
1. The electric and magnetic field E and B are represented by perpendicular oscillating vectors.
2. The first and third equation (∇·E = 0 and ∇·B = 0) state that there are no static or moving charges around and, hence, they do not have any impact on (the flux of) E and B.
3. The second and fourth equation are the ones that are essential. Note the time derivatives (∂/∂t): E and B oscillate and perpetuate each other by inducing new circulation of B and E.
Heaviside form of Maxwell's equations
The constants μ and ε in the fourth equation are the so-called permeability (μ) and permittivity (ε) of the medium, and μ0 and ε0 are the values for these constants in a vacuum. Now, it is interesting to note that μ0ε0 equals 1/c², so a changing electric field only produces a tiny change in the circulation of the magnetic field. That’s got something to do with magnetism being a ‘relativistic’ effect, but I won’t explore that here – except for noting that the final Lorentz force on a (charged) particle, F = q(E + v×B), will be the same regardless of the reference frame: the reference frame will determine the mixture of E and B fields, but there is only one combined force on a charged particle in the end, whatever the (inertial) reference frame – moving at relativistic speed (i.e. close to c) or not. [The forces F, E and B on a moving (charged) particle are shown below the animation of the electromagnetic wave.] In other words, Maxwell’s equations are fully compatible with special relativity. In fact, Einstein observed that these equations ensure that electromagnetic waves always travel at speed c (to use his own words: “Light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body.”) and it’s this observation that led him to develop his special relativity theory.
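That μ0ε0 = 1/c² relation is easy to verify with tabulated values (using scipy’s physical constants):

```python
from scipy.constants import mu_0, epsilon_0, c

# Check that mu_0 * epsilon_0 equals 1/c^2 for the vacuum values.
print(mu_0 * epsilon_0)  # ≈ 1.113e-17 s²/m²
print(1 / c**2)          # the same value (to within the precision of mu_0)
```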
The other interesting thing to note is that there is energy in these oscillating fields and, hence, in the electromagnetic wave. Hence, if the wave hits an impenetrable barrier, such as a paper sheet, it exerts pressure on it – known as radiation pressure. [By the way, did you ever wonder why a light beam can travel through glass but not through paper? Check it out!] A very oft-quoted example is the following: if the effects of the sun’s radiation pressure on the Viking spacecraft had been ignored, the spacecraft would have missed its Mars orbit by about 15,000 kilometers. Another common example is more science fiction-oriented: the (theoretical) possibility of space ships using huge sails driven by sunlight (paper sails obviously – one should not use transparent plastic for that).
I am mentioning radiation pressure because, although it is not that difficult to explain radiation pressure using classical electromagnetism (i.e. light as waves), the explanation provided by the ‘particle model’ of light is much more straightforward and, hence, a good starting point to discuss the particle nature of light:
1. Electromagnetic radiation is quantized in particles called photons. We know that because of Max Planck’s work on black-body radiation, which led to Planck’s relation: E = hν. Photons are bona fide particles in the so-called Standard Model of physics: they are defined as bosons with spin 1, but zero rest mass and no electric charge (as opposed to the W bosons). They are denoted by the letter or symbol γ (gamma), so that’s the same symbol that’s used to denote gamma rays. [Gamma rays are high-energy electromagnetic radiation (i.e. ‘light’) with a very definite particle character. Indeed, because of their very short wavelength – less than 10 picometer (10×10⁻¹² m) – and high energy (hundreds of keV, as opposed to visible light, which has a wavelength between 380 and 750 nanometer (380–750×10⁻⁹ m) and a typical energy of 2 to 3 eV only – so a few hundred thousand times less), they are capable of penetrating through thick layers of concrete, and the human body – where they might damage intracellular bodies and create cancer (lead is a more efficient barrier obviously: a shield of a few centimeters of lead will stop most of them). In case you are not sure about the relation between energy and penetration depth, see the Post Scriptum.]
2. Although photons are considered to have zero rest mass, they have energy and, hence, an equivalent relativistic mass (m = E/c²) and, therefore, also momentum. Indeed, energy and momentum are related through the following (relativistic) formula: E = (p²c² + m0²c⁴)^(1/2) (the non-relativistic version is simply E = p²/2m0 but – quite obviously – an approximation that cannot be used in this case, if only because the denominator would be zero). For a photon, with m0 = 0, this simplifies to E = pc or p = E/c. This basically says that the energy (E) and the momentum (p) of a photon are proportional, with c – the velocity of the wave – as the factor of proportionality.
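A quick numerical illustration of E = hc/λ and p = E/c – the two wavelengths below are just representative picks, one for visible light and one for a gamma ray:

```python
from scipy.constants import h, c, e

# Photon energy and momentum from the wavelength: E = h*f = h*c/lambda, p = E/c.
for name, lam in [("green light", 550e-9), ("gamma ray", 1e-12)]:
    E = h * c / lam   # energy in joule
    p = E / c         # momentum in kg·m/s
    print(f"{name}: E ≈ {E / e:.3g} eV, p ≈ {p:.3g} kg·m/s")
```

The green photon comes out at about 2.3 eV – in line with the 2 to 3 eV mentioned above – while the 1-picometer gamma photon carries over half a million times more energy (and momentum).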
3. The generation of radiation pressure can then be directly related to the momentum property of photons, as shown in the diagram below – which shows how radiation force could – perhaps – propel a space sailing ship. [Nice idea, but I’d rather bet on nuclear-thermal rocket technology.]
I said in my introduction to this post that light and matter are two very different things. They are, and the logic connecting matter waves and electromagnetic radiation is not straightforward – if there is any. Let’s look at the two equations that are supposed to relate the two – the de Broglie relation and the Planck relation:
1. The de Broglie relation E = hf assigns a de Broglie frequency f (i.e. the frequency of a complex-valued probability amplitude function) to a particle with mass m through the mass-energy equivalence relation E = mc². However, the concept of a matter wave is rather complicated (if you don’t think so: read the two previous posts): matter waves have little – if anything – in common with electromagnetic waves. Feynman calls electromagnetic waves ‘real’ waves (just like water waves, or sound waves, or whatever other wave) as opposed to… Well – he does stop short of calling matter waves unreal, but it’s obvious they look ‘less real’ than ‘real’ waves. Indeed, these complex-valued psi functions (Ψ) – for which we have to square the modulus to get the probability of something happening in space and time, or to measure the likely value of some observable property of the system – are obviously ‘something else’! [I tried to convey their ‘reality’ as well as I could in my previous post, but I am not sure I did a good job – not really.]
2. The Planck relation E = hν relates the energy of a photon – the so-called quantum of light (das Lichtquant as Einstein called it in 1905 – the term ‘photon’ was coined some 20 years later it is said) – to the frequency of the electromagnetic wave of which it is part. [That Greek symbol (ν) – it’s the letter nu (the ‘v’ in Greek is amalgamated with the ‘b’) – is quite confusing: it’s not the v for velocity.]
So, while the Planck relation (which goes back to 1905) obviously inspired Louis de Broglie (who introduced his theory on electron waves some 20 years later – in his PhD thesis of 1924 to be precise), their equations look the same but are different – and that’s probably the main reason why we keep two different symbols – f and ν – for the two frequencies.
Photons and electrons are obviously very different particles as well. Just to state the obvious:
1. Photons have zero rest mass, travel at the speed of light, have no electric charge, are bosons, and so on and so on, and so they behave differently (see, for example, my post on Bose and Fermi, which explains why one cannot make proton beam lasers). [As for the boson qualification, bosons are force carriers: photons in particular mediate (or carry) the electromagnetic force.]
2. Electrons do not weigh much and, hence, can attain speeds close to light (but it requires tremendous amounts of energy to accelerate them very near c) but so they do have some mass, they have electric charge (photons are electrically neutral), and they are fermions – which means they’re an entirely different ‘beast’ so to say when it comes to combining their probability amplitudes (so that’s why they’ll never get together in some kind of electron laser beam either – just like protons or neutrons – as I explain in my post on Bose and Fermi indeed).
That being said, there’s some connection of course (and that’s what’s being explored in QED):
1. Accelerating electric charges cause electromagnetic radiation (so moving charges (the negatively charged electrons) cause the electromagnetic field oscillations, but it’s the (neutral) photons that carry it).
2. Electrons absorb and emit photons as they gain/lose energy when going from one energy level to the other.
3. Most important of all, individual photons – just like electrons – also have a probability amplitude function – so that’s a de Broglie or matter wave function if you prefer that term.
That means photons can also be described in terms of some kind of complex wave packet, just like that electron I kept analyzing in my previous posts – until I (and surely you) got tired of it. That means we’re presented with the same type of mathematics. For starters, we cannot be happy with assigning a unique frequency to our (complex-valued) de Broglie wave, because that would – once again – mean that we have no clue whatsoever where our photon actually is. So, while the shape of the wave function below might well describe the E and B of a bona fide electromagnetic wave, it cannot describe the (real or imaginary) part of the probability amplitude of the photons we would associate with that wave.
constant frequency wave

So that doesn’t work. We’re back at analyzing wave packets – and, by now, you know how complicated that can be: I am sure you don’t want me to mention Fourier transforms again! So let’s turn to Feynman once again – the greatest of all (physics) teachers – to get his take on it. Now, the surprising thing is that, in his 1985 Lectures on Quantum Electrodynamics (QED), he doesn’t really care about the amplitude of a photon to be at point x at time t. What he needs to know is:
1. The amplitude of a photon to go from point A to B, and
2. The amplitude of a photon to be absorbed/emitted by an electron (a photon-electron coupling as it’s called).
And then he needs only one more thing: the amplitude of an electron to go from point A to B. That’s all he needs to explain EVERYTHING – in quantum electrodynamics that is. So that’s partial reflection, diffraction, interference… Whatever! In Feynman’s own words: “Out of these three amplitudes, we can make the whole world, aside from what goes on in nuclei, and gravitation, as always!” So let’s have a look at it.
I’ve shown some of his illustrations already in the Bose and Fermi post I mentioned above. In Feynman’s analysis, photons get emitted by some source and, as soon as they do, they travel with some stopwatch, as illustrated below. The speed with which the hand of the stopwatch turns is the angular frequency of the phase of the probability amplitude, and its length is the modulus – which, you’ll remember, we need to square to get a probability of something, so for the illustration below we have a probability of 0.2×0.2 = 4%. Probability of what? Relax. Let’s go step by step.
Let’s first relate this probability amplitude stopwatch to a theoretical wave packet, such as the one below – which is a nice Gaussian wave packet:
example of wave packet
This thing really fits the bill: it’s associated with a nice Gaussian probability distribution (aka a normal distribution, because – despite its ideal shape from a math point of view – it actually does describe many real-life phenomena), and we can easily relate the stopwatch’s angular frequency to the angular frequency of the phase of the wave. The only thing you’ll need to remember is that its amplitude is not constant in space and time: indeed, this photon is somewhere sometime, and that means it’s no longer there when it’s gone, and also that it’s not there when it hasn’t arrived yet. 🙂 So, as long as you remember that, Feynman’s stopwatch is a great way to represent a photon (or any particle really). [Just think of a stopwatch in your hand with no hand, but then suddenly that hand grows from zero to 0.2 (or some other random value between 0 and 1) and then shrinks back from that random value to 0 as the photon whizzes by. […] Or find some other creative interpretation if you don’t like this one. :-)]
Now, of course we do not know at what time the photon leaves the source and so the hand of the stopwatch could be at 2 o’clock, 9 o’clock or whatever: so the phase could be shifted by any value really. However, the thing to note is that the stopwatch’s hand goes around and around at a steady (angular) speed.
That’s OK. We can’t know where the photon is because we’re obviously assuming a nice standardized light source emitting polarized light with a very specific color, i.e. all photons have the same frequency (so we don’t have to worry about spin and all that). Indeed, because we’re going to add and multiply amplitudes, we have to keep it simple (the complicated things should be left to clever people – or academics). More importantly, it’s OK because we don’t need to know the exact position of the hand of the stopwatch as the photon leaves the source in order to explain phenomena like the partial reflection of light on glass. What matters there is only how much the stopwatch hand turns in the short time it takes to go from the front surface of the glass to its back surface. That difference in phase is independent from the position of the stopwatch hand as it reaches the glass: it only depends on the angular frequency (i.e. the energy of the photon, or the frequency of the light beam) and the thickness of the glass sheet. The two cases below present two possibilities: a 5% chance of reflection and a 16% chance of reflection (16% is actually a maximum, as Feynman shows in that little book, but that doesn’t matter here).
partial reflection
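You can mimic the two-surface arithmetic with a couple of lines. The sketch below is a simplification of Feynman’s argument (I am collapsing his front-surface half-turn and the thickness-dependent turning into a single relative phase), but it reproduces the headline result: the reflection probability oscillates between 0 and 16% as the glass gets thicker:

```python
import cmath
import math

# Two arrows of length 0.2 (Feynman's single-surface value): one from the
# front surface, one from the back, with a thickness-dependent relative phase.
r = 0.2

def reflection_probability(phase):
    front = r
    back = -r * cmath.exp(1j * phase)  # sign/phase bookkeeping simplified here
    return abs(front + back) ** 2

for phase in (0.0, math.pi / 4, math.pi / 2, math.pi):
    print(f"relative phase {phase:.2f} rad -> P ≈ {reflection_probability(phase):.1%}")
```

A zero-thickness sheet gives zero reflection (the two arrows cancel), and the worst case gives |0.2 + 0.2|² = 16% – the maximum mentioned above.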
But – Hey! – I am suddenly talking amplitudes for reflection here, and the probabilities that I am calculating (by adding amplitudes, not probabilities) are also (partial) reflection probabilities. Damn ! YOU ARE SMART! It’s true. But you get the idea, and I told you already that Feynman is not interested in the probability of a photon just being here or there or wherever. He’s interested in (1) the amplitude of it going from the source (i.e. some point A) to the glass surface (i.e. some other point B), and then (2) the amplitude of photon-electron couplings – which determine the above amplitudes for being reflected (i.e. being (back)scattered by an electron actually).
So what? Well… Nothing. That’s it. I just wanted to give you some sense of de Broglie waves for photons. The thing to note is that they’re like de Broglie waves for electrons. So they are as real or unreal as these electron waves, and they have close to nothing to do with the electromagnetic wave of which they are part. The only thing that relates them with that ‘real’ wave, so to say, is their energy level, and so that determines their de Broglie wavelength. So, it’s strange to say, but we have two frequencies for a photon: E = hν and E = hf. The first one is the Planck relation (E = hν): it associates the energy of a photon with the frequency of the real-life electromagnetic wave. The second is the de Broglie relation (E = hf): once we’ve calculated the energy of a photon using E = hν, we associate a de Broglie wavelength with the photon. So we imagine it as a traveling stopwatch with angular frequency ω = 2πf.
So that’s it (for now). End of story.
Now, you may want to know something more about these other amplitudes (that’s what I would want), i.e. the amplitude of a photon to go from A to B and this coupling amplitude and whatever else that may or may not be relevant. Right you are: it’s fascinating stuff. For example, you may or may not be surprised that photons have an amplitude to travel faster or slower than light from A to B, and that they actually have many amplitudes to go from A to B: one for each possible path. [Does that mean that the path does not have to be straight? Yep. Light can take strange paths – and it’s the interplay (i.e. the interference) between all these amplitudes that determines the most probable path – which, fortunately (otherwise our amplitude theory would be worthless), turns out to be the straight line.] We can summarize this in a really short and nice formula for the P(A to B) amplitude [note that the ‘P’ stands for photon, not for probability – Feynman uses an E for the related amplitude for an electron, so he writes E(A to B)].
However, I won’t make this any more complicated right now and so I’ll just reveal that P(A to B) depends on the so-called spacetime interval. This spacetime interval (I) is equal to I = (z₂ – z₁)² + (y₂ – y₁)² + (x₂ – x₁)² – (t₂ – t₁)², with the time and spatial distance being measured in equivalent units (so we’d use light-seconds for the unit of distance or, for the unit of time, the time it takes for light to travel one meter). I am sure you’ve heard about this interval. It’s used to explain the famous light cone – which determines what’s past and future in respect to the here and now in spacetime (or the past and present of some event in spacetime) in terms of:
1. What could possibly have impacted the here and now (taking into account nothing can travel faster than light – even if we’ve mentioned some exceptions to this already, such as the phase velocity of a matter wave – but so that’s not a ‘signal’ and, hence, not in contradiction with relativity)?
2. What could possibly be impacted by the here and now (again taking into account that nothing can travel faster than c)?
In short, the light cone defines the past, the here, and the future in spacetime in terms of (potential) causal relations. However, as this post has – once again – become too long already, I’ll need to write another post to discuss these other types of amplitudes – and how they are used in quantum electrodynamics. So my next post should probably say something about light-matter interaction, or on photons as the carriers of the electromagnetic force (both in light as well as in an atom – as it’s the electromagnetic force that keeps an electron in orbit around the (positively charged) nucleus). In case you wonder, yes, that’s Feynman diagrams – among other things.
Post scriptum: On frequency, wavelength and energy – and the particle- versus wave-like nature of electromagnetic waves
I wrote that gamma waves have a very definite particle character because of their very short wavelength. Indeed, most discussions of the electromagnetic spectrum will start by pointing out that higher frequencies or shorter wavelengths – higher frequency (f) implies shorter wavelength (λ) because the wavelength is the speed of the wave (c in this case) over the frequency: λ = c/f – will make the (electromagnetic) wave more particle-like. For example, I copied two illustrations from Feynman’s very first Lectures (Volume I, Lectures 2 and 5) in which he makes the point by showing
1. The familiar table of the electromagnetic spectrum (we could easily add a column for the wavelength (just calculate λ = c/f) and the energy (E = hf) besides the frequency – see the quick sketch after the illustrations below), and
2. An illustration that shows what matter (a block of carbon of 1 cm thick in this case) looks like for an electromagnetic wave racing towards it. It does not look like Gruyère cheese, because Gruyère cheese is cheese with holes: matter is huge holes with just a tiny little bit of cheese! Indeed, at the micro-level, matter looks like a lot of nothing with only a few tiny specks of matter sprinkled about!
And so then he goes on to describe how ‘hard’ rays (i.e. rays with short wavelengths) just plow right through and so on and so on.
electromagnetic spectrum

carbon close-up view
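As for that extra wavelength-and-energy column: here’s the quick sketch announced above, for a few representative frequencies (my own picks):

```python
from scipy.constants import h, c, e

# Extra columns for the electromagnetic spectrum table: lambda = c/f and E = h*f.
bands = {
    "FM radio (100 MHz)": 1e8,
    "visible light (550 THz)": 5.5e14,
    "X-rays (10^18 Hz)": 1e18,
    "gamma rays (10^20 Hz)": 1e20,
}
for name, f in bands.items():
    lam = c / f          # wavelength in meter
    E_eV = h * f / e     # photon energy in electronvolt
    print(f"{name:26s} lambda ≈ {lam:9.3g} m   E ≈ {E_eV:9.3g} eV")
```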
Now it will probably sound very stupid to non-autodidacts but, for a very long time, I was vaguely intrigued that the amplitude of a wave doesn’t seem to matter when looking at the particle- versus wave-like character of electromagnetic waves. Electromagnetic waves are transverse waves so they oscillate up and down, perpendicular to the direction of travel (as opposed to longitudinal waves, such as sound waves or pressure waves for example: these oscillate back and forth – in the same direction of travel). And photon paths are represented by wiggly lines, so… Well, you may not believe it but that’s why I stupidly thought it’s the amplitude that should matter, not the wavelength.
Indeed, the illustration below – which could be an example of how E or B oscillates in space and time – would suggest that lower amplitudes (smaller A’s) are the key to ‘avoiding’ those specks of matter. And if one can’t do anything about amplitude, then one may be forgiven for thinking that longer wavelengths – not shorter ones – are the key to avoiding those little ‘obstacles’ presented by atoms or nuclei in some crystal or non-crystalline structure. [Just jot it down: more wiggly lines increase the chance of hitting something.] But… Both lower amplitudes and longer wavelengths imply less energy. Indeed, the energy of a wave is, in general, proportional to the square of its amplitude, and electromagnetic waves are no exception in this regard. As for wavelength, we have Planck’s relation. So what’s wrong with my very childish reasoning?
Cosine wave concepts
As usual, the answer is easy for those who already know it: neither wavelength nor amplitude have anything to do with how much space this wave actually takes as it propagates. But of course! You didn’t know that? Well… Sorry. Now I do. The vertical y axis might measure E and B indeed, but the graph and the nice animation above should not make you think that these field vectors actually occupy some space. So you can think of electromagnetic waves as particle waves indeed: we’ve got ‘something’ that’s traveling in a straight line, and it’s traveling at the speed of light. That ‘something’ is a photon, and it can have high or low energy. If it’s low-energy, it’s like a speck of dust: even if it travels at the speed of light, it is easy to deflect (i.e. scatter), and the ’empty space’ in matter (which is, of course, not empty but full of all kinds of electromagnetic disturbances) may well feel like jelly to it: it will get stuck (read: it will be absorbed somewhere or not even get through the first layer of atoms at all). If it’s high-energy, then it’s a different story: then the photon is like a tiny but very powerful bullet – same size as the speck of dust, and same speed, but much, much heavier. Such a ‘bullet’ (e.g. a gamma ray photon) will indeed have a tendency to plow through matter like it’s air: it won’t care about all these low-energy fields in it.
It is, most probably, a very trivial point to make, but I thought it’s worth doing so.
[When thinking about the above, also remember the trivial relationship between energy and momentum for photons: p = E/c, so more energy means more momentum: a heavy truck crashing into your house will create more damage than a Mini at the same speed because the truck has much more momentum. So just use the mass-energy equivalence (E = mc²) and think about high-energy photons as armored vehicles and low-energy photons as mom-and-pop cars.]
Re-visiting the matter wave (II)
My previous post was, once again, littered with formulas – even if I had not intended it to be that way: I want to convey some kind of understanding of what an electron – or any particle at the atomic scale – actually is – with the minimum number of formulas necessary.
We know particles display wave behavior: when an electron beam encounters an obstacle or a slit that is somewhat comparable in size to its wavelength, we’ll observe diffraction, or interference. [I have to insert a quick note on terminology here: the terms diffraction and interference are often used interchangeably, but there is a tendency to use interference when we have more than one wave source and diffraction when there is only one wave source. However, I’ll immediately add that distinction is somewhat artificial. Do we have one or two wave sources in a double-slit experiment? There is one beam but the two slits break it up in two and, hence, we would call it interference. If it’s only one slit, there is also an interference pattern, but the phenomenon will be referred to as diffraction.]
We also know that the wavelength we are talking about here is not the wavelength of some electromagnetic wave, like light. It’s the wavelength of a de Broglie wave, i.e. a matter wave: such a wave is represented by an (oscillating) complex number – so we need to keep track of a real and an imaginary part – representing a so-called probability amplitude Ψ(x, t) whose modulus squared (│Ψ(x, t)│²) is the probability of actually detecting the electron at point x at time t. [The purists will say that complex numbers can’t oscillate – but I am sure you get the idea.]
You should read the phrase above twice: we cannot know where the electron actually is. We can only calculate probabilities (and, of course, compare them with the probabilities we get from experiments). Hence, when the wave function tells us the probability is greatest at point x at time t, then we may be lucky when we actually probe point x at time t and find it there, but it may also not be there. In fact, the probability of finding it exactly at some point x at some definite time t is zero. That’s just a characteristic of such probability density functions: we need to probe some region Δx in some time interval Δt.
If you think that is not very satisfactory, there’s actually a very common-sense explanation that has nothing to do with quantum mechanics: our scientific instruments do not allow us to go beyond a certain scale anyway. Indeed, the resolution of the best electron microscopes, for example, is some 50 picometer (1 pm = 1×10⁻¹² m): that’s small (and resolutions get higher by the year), but it implies that we are not looking at points – as defined in math, that is: something with zero dimension – but at pixels of size Δx = 50×10⁻¹² m.
The same goes for time. Time is measured by atomic clocks nowadays but even these clocks do ‘tick’, and these ‘ticks’ are discrete. Atomic clocks take advantage of the property of atoms to resonate at extremely consistent frequencies. I’ll say something more about resonance soon – because it’s very relevant for what I am writing about in this post – but, for the moment, just note that, for example, Caesium-133 (which was used to build the first atomic clock) oscillates at 9,192,631,770 cycles per second. In fact, the International Bureau of Weights and Measures re-defined the (time) second in 1967 to correspond to “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the Caesium-133 atom at rest at a temperature of 0 K.”
Don’t worry about it: the point to note is that, when it comes to measuring time, we also have an uncertainty. Now, when using this Caesium-133 atomic clock, this uncertainty would be in the range of ±1×10–10 seconds (so that’s a tenth of a nanosecond: the clock ‘ticks’ 9,192,631,770 times per second, so one ‘tick’ lasts about 1.1×10–10 s). However, there are other, more indirect ways of measuring much shorter time intervals: some of the unstable baryons have lifetimes in the range of a few picoseconds only (1 ps = 1×10–12 s) and the really unstable ones – known as baryon resonances – have lifetimes in the 1×10–22 to 1×10–24 s range. We can only measure these because they leave some trace after these particle collisions in particle accelerators and, because we have some idea about their speed, we can calculate their lifetime from the (limited) distance they travel before disintegrating. The thing to remember is that, for time too, we have to make do with time pixels instead of time points, so there is a Δt as well. [In case you wonder what baryons are: they are particles consisting of three quarks, and the proton and the neutron are the most prominent (and most stable) representatives of this family of particles.]
So what’s the size of an electron? Well… It depends. We need to distinguish two very different things: (1) the size of the area where we are likely to find the electron, and (2) the size of the electron itself. Let’s start with the latter, because that’s the easiest question to answer: there is a so-called classical electron radius re, also known as the Thomson scattering length, which has been calculated as:
re = (1/4πε0)·(e2/mec2) = 2.817 940 3267(27)×10–15 m

As for the constants in this formula, you know these by now: the speed of light c, the electron charge e, its mass me, and the permittivity of free space ε0. For whatever it’s worth (because you should note that, in quantum mechanics, electrons do not have a size: they are treated as point-like particles, so they have a point charge and zero dimension), that’s small. It’s in the femtometer range (1 fm = 1×10–15 m). You may or may not remember that the size of a proton is in the femtometer range as well – 1.7 fm to be precise – and we had a femtometer size estimate for quarks as well: 0.7 fm. So we have the rather remarkable result that the much heavier proton (its rest mass is 938 MeV/c2 as opposed to only 0.511 MeV/c2 for the electron, so the proton is about 1836 times heavier) is 1.65 times smaller than the electron. That’s something to be explored later: for the moment, we’ll just assume the electron wiggles around a bit more – exactly because it’s lighter. Here you just have to note that this ‘classical’ electron radius does measure something: it’s something ‘hard’ and ‘real’ because it scatters, absorbs or deflects photons (and/or other particles). In one of my previous posts, I explained how particle accelerators probe things at the femtometer scale, so I’ll refer you to that post (End of the Road to Reality?) and move on to the next question.
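If you want to check that number yourself, the formula is a one-liner. A quick sketch in Python, using the CODATA constants that ship with scipy:

```python
from scipy.constants import e, epsilon_0, m_e, c, pi

# Classical electron radius: r_e = e^2 / (4*pi*eps0*m_e*c^2)
r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)
print(f"r_e = {r_e:.6e} m")  # ~2.818e-15 m, i.e. in the femtometer range
```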
The question concerning the area where we are likely to detect the electron is more interesting in light of the topic of this post (the nature of these matter waves). It is given by that wave function and, from my previous post, you’ll remember that we’re talking the nanometer scale here (1 nm = 1×10–9 m), so that’s a million times larger than the femtometer scale. Indeed, we’ve calculated a de Broglie wavelength of 0.33 nanometer for relatively slow-moving electrons (electrons in orbit), and the slits used in single- or double-slit experiments with electrons are also nanotechnology. In fact, now that we are here, it’s probably good to look at those experiments in detail.
The illustration below relates the actual experimental set-up of a double-slit experiment performed in 2012 to Feynman’s 1965 thought experiment. Indeed, in 1965, the nanotechnology you need for this kind of experiment was not yet available, although the phenomenon of electron diffraction had been confirmed experimentally already in 1925 in the famous Davisson-Germer experiment. [It’s famous not only because electron diffraction was a weird thing to contemplate at the time but also because it confirmed the de Broglie hypothesis only two years after Louis de Broglie had advanced it!]. But so here is the experiment which Feynman thought would never be possible because of technology constraints:
Electron double-slit set-up

The insert in the upper-left corner shows the two slits: they are each 50 nanometer wide (50×10–9 m) and 4 micrometer tall (4×10–6 m). [The thing in the middle of the slits is just a little support. Please do take a few seconds to contemplate the technology behind this feat: 50 nm is 50 millionths of a millimeter. Try to imagine dividing one millimeter in ten, and then one of these tenths in ten again, and again, and again, and again. You just can’t imagine that, because our mind is used to addition/subtraction and – to some extent – to multiplication/division: our mind can’t really deal with exponentiation, because it’s not an everyday phenomenon.] The second insert (in the upper-right corner) shows the mask that can be moved to close one or both slits partially or completely.
Now, 50 nanometer is 150 times larger than the 0.33 nanometer range we got for ‘our’ electron, but it’s small enough to show diffraction and/or interference. [In fact, in this experiment (done by Bach, Pope, Liou and Batelaan from the University of Nebraska-Lincoln less than two years ago indeed), the beam consisted of electrons with an (average) energy of 600 eV and a de Broglie wavelength of 50 picometer. So that’s like the electrons used in electron microscopes. 50 pm is 6.6 times smaller than the 0.33 nm wavelength we calculated for our low-energy (70 eV) electron – but then the energy and the fact that these electrons are guided in electromagnetic fields explain the difference.] Let’s go to the results.
The illustration below shows the predicted pattern next to the observed pattern for the two scenarios:
1. We first close slit 2, let a lot of electrons go through slit 1, and so we get a pattern described by the probability density function P1 = |Φ1|2. Here we see no interference but a typical diffraction pattern: the intensity follows a more or less normal (i.e. Gaussian) distribution. We then close slit 1 (and open slit 2 again), again let a lot of electrons through, and get a pattern described by the probability density function P2 = |Φ2|2. So that’s how we get P1 and P2.
2. We then open both slits, let a whole bunch of electrons through, and get the pattern described by the probability density function P12 = |Φ1 + Φ2|2, which we get not from adding the probabilities P1 and P2 (hence, P12 ≠ P1 + P2) – as one would expect if electrons would behave like particles – but from adding the probability amplitudes. We have interference, rather than diffraction. [The little sketch right below this list makes the amplitude-versus-probability arithmetic concrete.]
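Just to show the difference between adding probabilities and adding amplitudes in numbers, here is a hedged little sketch: the two amplitude profiles Φ1 and Φ2 below are made-up Gaussians with a position-dependent phase (assumptions of mine, not the actual slit amplitudes), but the punchline is general: |Φ1 + Φ2|2 is not |Φ1|2 + |Φ2|2.

```python
import numpy as np

# Two assumed complex amplitude profiles for slit 1 and slit 2: Gaussians
# with a position-dependent phase, purely for illustration.
x = np.linspace(-1, 1, 2001)           # detector coordinate (arbitrary units)
phi1 = np.exp(-(x - 0.1)**2 / 0.05) * np.exp(1j * 40 * x)
phi2 = np.exp(-(x + 0.1)**2 / 0.05) * np.exp(-1j * 40 * x)

P1, P2 = np.abs(phi1)**2, np.abs(phi2)**2
P12 = np.abs(phi1 + phi2)**2           # interference: the cross term is included

print(np.allclose(P12, P1 + P2))       # False: P12 != P1 + P2
```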
Predicted interference effect

But so what exactly is interfering? Well… The electrons. But that can’t be, can it?
The electrons are obviously particles, as evidenced from the impact they make – one by one – as they hit the screen as shown below. [If you want to know what screen, let me quote the researchers: “The resulting patterns were magnified by an electrostatic quadrupole lens and imaged on a two-dimensional microchannel plate and phosphorus screen, then recorded with a charge-coupled device camera. […] To study the build-up of the diffraction pattern, each electron was localized using a “blob” detection scheme: each detection was replaced by a blob, whose size represents the error in the localization of the detection scheme. The blobs were compiled together to form the electron diffraction patterns.” So there you go.]
Electron blobs
Look carefully at how this interference pattern becomes ‘reality’ as the electrons hit the screen one by one. And then say it: WAW !
Indeed, as predicted by Feynman (and any other physics professor at the time), even if the electrons go through the slits one by one, they will interfere – with themselves so to speak. [In case you wonder if these electrons really went through one by one, let me quote the researchers once again: “The electron source’s intensity was reduced so that the electron detection rate in the pattern was about 1 Hz. At this rate and kinetic energy, the average distance between consecutive electrons was 2.3 × 106 meters. This ensures that only one electron is present in the 1 meter long system at any one time, thus eliminating electron-electron interactions.” You don’t need to be a scientist or engineer to understand that, do you?]
While this is very spooky, I have not seen any better way to describe the reality of the de Broglie wave: the particle is not some point-like thing but a matter wave, as evidenced from the fact that it does interfere with itself when forced to move through two slits – or through one slit, as evidenced by the diffraction patterns built up in this experiment when closing one of the two slits: the electrons went through one by one as well!
But so how does it relate to the characteristics of that wave packet which I described in my previous post? Let me sum up the salient conclusions from that discussion:
1. The wavelength λ of a wave packet is calculated directly from the momentum by using de Broglie‘s second relation: λ = h/p. In this case, the wavelength of the electrons averaged 50 picometer. That’s relatively small as compared to the width of the slit (50 nm) – a thousand times smaller actually! – but, as evidenced by the experiment, it’s small enough to show the ‘reality’ of the de Broglie wave.
2. From a math point of view (but, of course, Nature does not care about our math), we can decompose the wave packet in a finite or infinite number of component waves. Such decomposition is referred to, in the first case (finite number of component waves, or discrete calculus), as a Fourier analysis, or, in the second case, as a Fourier transform. A Fourier transform maps our (continuous) wave function, Ψ(x), to a (continuous) wave function in the momentum space, which we noted as φ(p). [In fact, we noted it as Φ(p) but I don’t want to create confusion with the Φ symbol used in the experiment, which is actually the wave function in space, so Ψ(x) is Φ(x) in the experiment – if you know what I mean.] The point to note is that uncertainty about momentum is related to uncertainty about position. In this case, we’ll have pretty standard electrons (so not much variation in momentum), and so the location of the wave packet in space should be fairly precise as well.
3. The group velocity of the wave packet (vg) – i.e. the envelope in which our Ψ wave oscillates – equals the speed of our electron (v), but the phase velocity (i.e. the speed of our Ψ wave itself) is superluminal: we showed it’s equal to vp = E/p = c2/v = c/β, with β = v/c, i.e. the ratio of the speed of our electron and the speed of light. Hence, the phase velocity will always be superluminal but will approach c as the speed of our particle approaches c. For slow-moving particles, we get astonishing values for the phase velocity, like more than a hundred times the speed of light for the electron we looked at in our previous post. That’s weird but it does not contradict relativity: if it helps, one can think of the wave packet as a modulation of an incredibly fast-moving ‘carrier wave’. [The short sketch right below this list checks these numbers for the 600 eV electrons used in the experiment.]
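For those who want to verify the numbers quoted above, here is a back-of-the-envelope sketch (mine, not the researchers’ computation): a 600 eV electron is non-relativistic, so its momentum is p = (2·me·K.E.)1/2, the de Broglie wavelength h/p should come out near 50 picometer, and the phase velocity c2/v comes out well above c.

```python
from math import sqrt
from scipy.constants import h, c, m_e, eV

KE = 600 * eV                      # kinetic energy in joule
p = sqrt(2 * m_e * KE)             # non-relativistic momentum
lam = h / p
v = p / m_e                        # group velocity = particle velocity
v_phase = c**2 / v                 # superluminal phase velocity

print(f"lambda    = {lam * 1e12:.1f} pm")   # ~50 pm
print(f"v (group) = {v:.3e} m/s")
print(f"v (phase) = {v_phase:.3e} m/s")     # > c, but it carries no information
```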
Is any of this relevant? Does it help you to imagine what the electron actually is? Or what that matter wave actually is? Probably not. You will still wonder: What does it look like? What is it in reality?
That’s hard to say. If the experiment above does not convey any ‘reality’ according to you, then perhaps the illustration below will help. It’s one I have used in another post too (An Easy Piece: Introducing Quantum Mechanics and the Wave Function). I took it from Wikipedia, and it represents “the (likely) space in which a single electron on the 5d atomic orbital of an atom would be found.” The solid body shows the places where the electron’s probability density (so that’s the squared modulus of the probability amplitude) is above a certain value – so it’s basically the area where the likelihood of finding the electron is higher than elsewhere. The hue on the colored surface shows the complex phase of the wave function.
So… Does this help?
You will wonder why the shape is so complicated (but it’s beautiful, isn’t it?) but that has to do with quantum-mechanical calculations involving quantum-mechanical quantities such as spin and other machinery which I don’t master (yet). I think there’s always a bit of a gap between ‘first principles’ in physics and the ‘model’ of a real-life situation (like a real-life electron in this case), but it’s surely the case in quantum mechanics! That being said, when looking at the illustration above, you should be aware of the fact that you are actually looking at a 3D representation of the wave function of an electron in orbit.
Indeed, wave functions of electrons in orbit are somewhat less random than – let’s say – the wave function of one of those baryon resonances I mentioned above. As mentioned in my Not So Easy Piece, in which I introduced the Schrödinger equation (i.e. one of my previous posts), they are solutions of a second-order partial differential equation – known as the Schrödinger wave equation indeed – which basically incorporates one key condition: these solutions – which are (atomic or molecular) ‘orbitals’ indeed – have to correspond to so-called stationary states or standing waves. Now what’s the ‘reality’ of that?
The illustration below comes from Wikipedia once again (Wikipedia is an incredible resource for autodidacts like me indeed) and so you can check the article (on stationary states) for more details if needed. Let me just summarize the basics:
1. A stationary state is called stationary because the system remains in the same ‘state’ independent of time. That does not mean the wave function is stationary. On the contrary, the wave function changes as function of both time and space – Ψ = Ψ(x, t) remember? – but it represents a so-called standing wave.
2. Each of these possible states corresponds to an energy state, which is given through the de Broglie relation: E = hf. So the energy of the state is proportional to the oscillation frequency of the (standing) wave, and Planck’s constant is the factor of proportionality. From a formal point of view, that’s actually the one and only condition we impose on the ‘system’, and so it immediately yields the so-called time-independent Schrödinger equation, which I briefly explained in the above-mentioned Not So Easy Piece (but I will not write it down here because it would only confuse you even more). Just look at these so-called harmonic oscillators below:
A and B represent a harmonic oscillator in classical mechanics: a ball with some mass m (mass is a measure for inertia, remember?) on a spring oscillating back and forth. In case you’d wonder what the difference is between the two: both the amplitude and the frequency of the movement are different. 🙂 A spring and a ball?
It represents a simple system. A harmonic oscillation is basically a resonance phenomenon: springs, electric circuits,… anything that swings, moves or oscillates (including large-scale things such as bridges and what have you – in his 1965 Lectures (Vol. I-23), Feynman even discusses resonance phenomena in the atmosphere) has some natural frequency ω0, also referred to as the resonance frequency, at which it oscillates naturally indeed: that means it requires (relatively) little energy to keep it going. How much energy it takes exactly to keep them going depends on the frictional forces involved: because the springs in A and B keep going, there’s obviously no friction involved at all. [In physics, we say there is no damping.] However, both springs do have a different k (that’s the key characteristic of a spring in Hooke’s Law, which describes how springs work), and the mass m of the ball might be different as well. Now, one can show that the period of this ‘natural’ movement will be equal to t0 = 2π/ω0 = 2π(m/k)1/2, or that ω0 = (k/m)1/2. So we’ve got an A and a B situation which differ in k and m. Let’s go to the so-called quantum oscillator, illustrations C to H.
C to H in the illustration are six possible solutions to the Schrödinger equation for this situation. The horizontal axis is position (the animation runs over time) – but we could switch the two independent variables easily: as I said a number of times already, time and space are interchangeable in the argument representing the phase (θ) of a wave provided we use the right units (e.g. light-seconds for distance and seconds for time): θ = ωt – kx. Apart from the nice animation, the other great thing about these illustrations – and the main difference with resonance frequencies in the classical world – is that they show both the real part (blue) as well as the imaginary part (red) of the wave function as a function of space (along the x axis) and time (the animation).
Is this ‘real’ enough? If it isn’t, I know of no way to make it any more ‘real’. Indeed, that’s key to understanding the nature of matter waves: we have to come to terms with the idea that these strange fluctuating mathematical quantities actually represent something. What? Well… The spooky thing that leads to the above-mentioned experimental results: electron diffraction and interference.
Let’s explore this quantum oscillator some more. Another key difference between natural frequencies in atomic physics (so the atomic scale) and resonance phenomena in ‘the big world’ is that there is more than one possibility: each of the six possible states above corresponds to a solution and an energy state indeed, which is given through the de Broglie relation: E = hf. However, in order to be fully complete, I have to mention that, while G and H are also solutions to the wave equation, they are actually not stationary states. The illustration below – which I took from the same Wikipedia article on stationary states – shows why. For stationary states, all observable properties of the state (such as the probability that the particle is at location x) are constant. For non-stationary states, the probabilities themselves fluctuate as a function of time (and of space, obviously), so the observable properties of the system are not constant. These solutions are solutions to the time-dependent Schrödinger equation and, hence, they are, obviously, time-dependent solutions.
StationaryStatesAnimation

We can find these time-dependent solutions by superimposing two stationary states, so we have a new wave function ΨN which is the sum of two others: ΨN = Ψ1 + Ψ2. [If you include the normalization factor (as you should, to make sure all probabilities add up to 1), it’s actually ΨN = (2–1/2)(Ψ1 + Ψ2).] So G and H above still represent a state of a quantum harmonic oscillator (with a specific energy level proportional to h), but so they are not standing waves.
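If you want to see the difference between a stationary state and such a superposition in numbers, here is a small sketch in natural units (ħ = m = ω = 1, an assumption of mine to keep the closed forms short), using the two lowest harmonic-oscillator eigenfunctions: |Ψ0|2 stays put, while |ΨN|2 sloshes back and forth in time.

```python
import numpy as np

# Two lowest harmonic-oscillator eigenfunctions, each evolving in time as
# phi_n(x)*exp(-i*E_n*t), with E0 = 1/2 and E1 = 3/2 in natural units.
x = np.linspace(-5.0, 5.0, 1001)
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)
phi1 = np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)

def psi(n, t):
    phi, E = ((phi0, 0.5), (phi1, 1.5))[n]
    return phi * np.exp(-1j * E * t)

i = 600  # index of the grid point x = 1.0
for t in (0.0, np.pi / 2, np.pi):
    stationary = np.abs(psi(0, t))**2                        # constant in t
    mixed = np.abs((psi(0, t) + psi(1, t)) / np.sqrt(2))**2  # oscillates in t
    print(f"t={t:4.2f}  |psi_0|^2(x=1) = {stationary[i]:.4f}   "
          f"|psi_N|^2(x=1) = {mixed[i]:.4f}")
```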
Let’s go back to our electron traveling in a more or less straight path. What’s the shape of the solution for that one? It could be anything. Well… Almost anything. As said, the only condition we can impose is that the envelope of the wave packet – its ‘general’ shape so to say – should not change. That’s because we should not have dispersion – as illustrated below. [Note that this illustration only represents the real or the imaginary part – not both – but you get the idea.]
That being said, if we exclude dispersion (because a real-life electron traveling in a straight line doesn’t just disappear – as do dispersive wave packets), then, inside of that envelope, the weirdest things are possible – in theory that is. Indeed, Nature does not care much about our Fourier transforms. So the example below, which shows a theoretical wave packet (again, the real or imaginary part only) based on some theoretical distribution of the wave numbers of the (infinite number) of component waves that make up the wave packet, may or may not represent our real-life electron. However, if our electron has any resemblance to real-life, then I would expect it to not be as well-behaved as the theoretical one that’s shown below.
example of wave packet
The shape above is usually referred to as a Gaussian wave packet, because of the nice normal (Gaussian) probability density functions that are associated with it. But we can also imagine a ‘square’ wave packet: a somewhat weird shape but – in terms of the math involved – as consistent as the smooth Gaussian wave packet, in the sense that we can demonstrate that the wave packet is made up of an infinite number of waves with an angular frequency ω that is linearly related to their wave number k, so the dispersion relation is ω = ak + b. [Remember we need to impose that condition to ensure that our wave packet will not dissipate (or disperse or disappear – whatever term you prefer).] That’s shown below: a Fourier analysis of a square wave.
Square wave packet
While we can construct many theoretical shapes of wave packets that respect the ‘no dispersion!’ condition, we cannot know which one will actually represent that electron we’re trying to visualize. Worse, if push comes to shove, we don’t know if these matter waves (so these wave packets) actually consist of component waves (or time-independent stationary states or whatever).
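That being said, the ‘no dispersion!’ condition itself is easy to play with numerically. The sketch below (all numbers arbitrary, so treat it as an illustration only) superposes component waves with a Gaussian spread of wave numbers, once with a linear dispersion relation ω = ak + b and once with a quadratic one: the first envelope just translates, the second spreads out.

```python
import numpy as np

# Build a packet from component waves exp(i(kx - omega(k)*t)) with an assumed
# Gaussian amplitude distribution A(k) around k0 = 5.
x = np.linspace(-20, 80, 4001)
k = np.linspace(2, 8, 400)
A = np.exp(-(k - 5)**2)

def packet(t, omega):
    # the modulus of the complex superposition traces the envelope
    return np.exp(1j * (np.outer(x, k) - omega(k) * t)) @ A

linear = lambda kk: 2 * kk + 1        # omega = a*k + b -> shape-preserving
quadratic = lambda kk: 0.5 * kk**2    # omega ~ k^2     -> dispersion

for t in (0, 10):
    for name, disp in (("linear", linear), ("quadratic", quadratic)):
        env = np.abs(packet(t, disp))
        span = x[env > 0.1 * env.max()]
        print(f"t={t:2d}  {name:9s}  envelope spans ~{span.max() - span.min():5.1f}")
```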
[…] OK. Let me finally admit it: while I am trying to explain to you the ‘reality’ of these matter waves, we actually don’t know how real these matter waves actually are. We cannot ‘see’ or ‘touch’ them indeed. All that we know is that (i) assuming their existence, and (ii) assuming these matter waves are more or less well-behaved (e.g. that actual particles will be represented by a composite wave characterized by a linear dispersion relation between the angular frequencies and the wave numbers of its (theoretical) component waves) allows us to do all that arithmetic with these (complex-valued) probability amplitudes. More importantly, all that arithmetic with these complex numbers actually yields (real-valued) probabilities that are consistent with the probabilities we obtain through repeated experiments. So that’s what’s real and ‘not so real’ I’d say.
Indeed, the bottom-line is that we do not know what goes on inside that envelope. Worse, according to the commonly accepted Copenhagen interpretation of the Uncertainty Principle (and tons of experiments have been done to try to overthrow that interpretation – all to no avail), we never will.
Re-visiting the matter wave (I)
In my previous posts, I introduced a lot of wave formulas. They are essential to understanding waves – both real ones (e.g. electromagnetic waves) as well as probability amplitude functions. Probability amplitude function is quite a mouthful so let me call it a matter wave, or a de Broglie wave. The formulas are necessary to create true understanding – whatever that means to you – because otherwise we just keep on repeating very simplistic but nonsensical things such as ‘matter behaves (sometimes) like light’, ‘light behaves (sometimes) like matter’ or, combining both, ‘light and matter behave like wavicles’. Indeed: what does ‘like‘ mean? Like the same but different? 🙂 So that means it’s different. Let’s therefore re-visit the matter wave (i.e. the de Broglie wave) and point out the differences with light waves.
In fact, this post actually has its origin in a mistake in a post scriptum of a previous post (An Easy Piece: On Quantum Mechanics and the Wave Function), in which I wondered what formula to use for the energy E in the (first) de Broglie relation E = hf (with f the frequency of the matter wave and h the Planck constant). Should we use (a) the kinetic energy of the particle, (b) the rest mass (mass is energy, remember?), or (c) its total energy? So let us first re-visit these de Broglie relations which, you’ll remember, relate energy and momentum to frequency (f) and wavelength (λ) respectively with the Planck constant as the factor of proportionality:
E = hf and p = h/λ
The de Broglie wave
I first tried kinetic energy in that E = hf equation. However, if you use the kinetic energy formula (K.E. = mv2/2, with v the velocity of the particle), then the second de Broglie relation (p = h/λ) does not come out right. The second de Broglie relation has the wavelength λ on the right side, not the frequency f. But it’s easy to go from one to the other: frequency and wavelength are related through the velocity of the wave (v). Indeed, the number of cycles per second (i.e. the frequency f) times the length of one cycle (i.e. the wavelength λ) gives the distance traveled by the wave per second, i.e. its velocity v. So fλ = v. Hence, using that kinetic energy formula and that very obvious fλ = v relation, we can write E = hf as mv2/2 = hv/λ and, hence, after moving one of the two v’s in v2 (and the 1/2 factor) on the left side to the right side of this equation, we get mv = 2h/λ. So there we are:
p = mv = 2h/λ.
Well… No. The second de Broglie relation is just p = h/λ. There is no factor 2 in it. So what’s wrong?
A factor of 2 in an equation like this surely doesn’t matter, does it? It does. We are talking tiny wavelengths here but a wavelength of 1 nanometer (1×10–9 m) – this is just an example of the scale we’re talking about here – is not the same as a wavelength of 0.5 nm. There’s another problem too. Let’s go back to our example of an electron with a mass of 9.1×10–31 kg (that’s very tiny, and so you’ll usually see it expressed in a unit that’s more appropriate to the atomic scale), moving about with a velocity of 2.2×106 m/s (that’s the estimated speed of orbit of an electron around a hydrogen nucleus: it’s fast (2,200 km per second), but still less than 1% of the speed of light), and let’s do the math.
[Before I do the math, however, let me quickly insert a line on that ‘other unit’ to measure mass. You will usually see it written down as eV, so that’s electronvolt. Electronvolt is a measure of energy but that’s OK because mass is energy according to Einstein’s mass-energy equation: E = mc2. The point to note is that the actual measure for mass at the atomic scale is eV/c2, so we make the unit even smaller by dividing the eV (which already is a very tiny amount of energy) by c2: 1 eV/c2 corresponds to 1.782662×10−36 kg, so the mass of our electron (9.1×10–31 kg) is about 510,000 eV/c2, or 0.510 MeV/c2. I am spelling it out because you will often just see 0.510 MeV in older or more popular publications, but so don’t forget that c2 factor. As for the calculations below, I just stick to the kg and m measures because they make the dimensions come out right.]
According to our kinetic energy formula (K.E. = mv2/2), these mass and velocity values correspond to an energy value of 2.2×10−18 joule (the joule is the so-called SI unit for energy – don’t worry about it right now). So, from the first de Broglie equation (f = E/h) – and using the right value for Planck’s constant (6.626×10–34 J·s) – we get a frequency of 3.32×1015 hertz (hertz just means oscillations per second, as you know). Now, using v once again, and fλ = v, we see that corresponds to a wavelength of 0.66 nanometer (0.66×10−9 m). [Just take the numbers and do the math.]
However, if we use the second de Broglie relation, which relates wavelength to momentum instead of energy, then we get 0.33 nanometer (0.33×10−9 m), so that’s half of the value we got from the first equation. So what is it: 0.33 or 0.66 nm? It’s that factor 2 again. Something is wrong.
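The arithmetic is easy enough to check. A few lines of Python (same inputs as above) reproduce the inconsistency:

```python
from scipy.constants import h, m_e

v = 2.2e6                      # m/s, our electron's speed
KE = 0.5 * m_e * v**2          # kinetic energy
f = KE / h                     # first de Broglie relation with E = K.E.
lam_from_f = v / f             # using f*lambda = v  -> 0.66 nm
lam_from_p = h / (m_e * v)     # second de Broglie relation -> 0.33 nm

print(f"from E = K.E.:     lambda = {lam_from_f * 1e9:.2f} nm")
print(f"from p = h/lambda: lambda = {lam_from_p * 1e9:.2f} nm")
```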
It must be that kinetic energy formula. You’ll say we should include potential energy or something. No. That’s not the issue. First, we’re talking a free particle here: an electron moving in space (a vacuum) with no external forces acting on it, so it’s a field-free space (or a region of constant potential). Second, we could, of course, extend the analysis and include potential energy, and show how it’s converted to kinetic energy (like a stone falling from 100 m to 50 m: potential energy gets converted into kinetic energy) but making our analysis more complicated by including potential energy as well will not solve our problem here: it will only make you even more confused.
Then it must be some relativistic effect you’ll say. No. It’s true the formula for kinetic energy above only holds for relatively low speeds (as compared to light, so ‘relatively’ low can be thousands of km per second) but that’s not the problem here: we are talking electrons moving at non-relativistic speeds indeed, so their mass or energy is not (or hardly) affected by relativistic effects and, hence, we can indeed use the more simple non-relativistic formulas.
The real problem we’re encountering here is not with the equations: it’s the simplistic model of our wave. We are imagining one wave here indeed, with a single frequency, a single wavelength and, hence, one single velocity – which happens to coincide with the velocity of our particle. Such a wave cannot possibly represent an actual de Broglie wave: the wave is everywhere and, hence, the particle it represents is nowhere. Indeed, a wave defined by a specific wavelength λ (or a wave number k = 2π/λ if we’re using complex number notation) and a specific frequency f or period T (or angular frequency ω = 2π/T = 2πf) will have a very regular shape – such as Ψ = Aei(ωt–kx) – and, hence, the probability of actually locating that particle at some specific point in space will be the same everywhere: |Ψ|2 = |Aei(ωt–kx)|2 = A2. [If you are confused about the math here, I am sorry but I cannot re-explain this once again: just remember that our de Broglie wave represents probability amplitudes – so that’s some complex number Ψ = Ψ(x, t) depending on space and time – and that we need to take the modulus squared of that complex number to get the probability associated with some (real) value x (i.e. the space variable) and some value t (i.e. the time variable).]
So the actual matter wave of a real-life electron will be represented by a wave train, or a wave packet as it is usually referred to. Now, a wave packet is described by (at least) two types of wave velocity:
1. The so-called group velocity: the group velocity of a wave is denoted by vg and is the velocity at which the wave packet as a whole is traveling. Wikipedia defines it as “the velocity with which the overall shape of the waves’ amplitudes — known as the modulation or envelope of the wave — propagates through space.”
2. The so-called phase velocity: the phase velocity is denoted by vp and is what we usually associate with the velocity of a wave. It is just what it says it is: the rate at which the phase of the (composite) wave travels through space.
The term between brackets above – ‘composite’ – already indicates what it’s all about: a wave packet is to be analyzed as a composite wave: so it’s a wave composed of a finite or infinite number of component waves which all have their own wave number k and their own angular frequency ω. So the mistake we made above is that, naively, we just assumed that (i) there is only one simple wave (and, of course, there is only one wave, but it’s not a simple one: it’s a composite wave), and (ii) that the velocity v of our electron would be equal to the velocity of that wave. Now that we are a little bit more enlightened, we need to answer two questions in regard to point (ii):
1. Why would that be the case?
2. If it is the case, then what wave velocity are we talking about: the group velocity or the phase velocity?
To answer both questions, we need to look at wave packets once again, so let’s do that. Just to visualize things, I’ll insert – once more – that illustration you’ve seen in my other posts already:
Explanation of uncertainty principle
The de Broglie wave packet
The Wikipedia article on the group velocity of a wave has wonderful animations, which I would advise you to look at in order to make sure you are following me here. There are several possibilities:
1. The phase velocity and the group velocity are the same: that’s a rather unexciting possibility but it’s the easiest thing to work with and, hence, most examples will assume that this is the case.
2. The group and phase velocity are not the same, but our wave packet is ‘stable’, so to say. In other words, the individual peaks and troughs of the wave within the envelope travel at a different speed (the phase velocity vp), but the envelope as a whole (so the wave packet as such) does not get distorted as it travels through space.
3. The wave packet dissipates: in this case, we have a constant group velocity, but the wave packet delocalizes. Its width increases over time and so the wave packet diffuses – as time goes by – over a wider and wider region of space, until it’s actually no longer there. [In case you wonder why I did not group this third possibility under (2): it’s a bit difficult to assign a fixed phase velocity to a wave like this.]
How the wave packet will behave depends on the characteristics of the component waves. To be precise, it will depend on their angular frequency and their wave number and, hence, their individual velocities. First, note the relationship between these three variables: ω = 2πf and k = 2π/λ so ω/k = fλ = v. So these variables are not independent: if you have two values (e.g. v and k), you also have the third one (ω). Secondly, note that the component waves of our wave packet will have different wavelengths and, hence, different wave numbers k.
Now, the de Broglie relation p = ħk (i.e. the same relation as p = h/λ but we replace λ with 2π/k and then ħ is the so-called reduced Planck constant ħ = h/2π) makes it obvious that different wave numbers k correspond to different values p for the momentum of our electron, so allowing for a spread in k (or a spread in λ as illustrated above) amounts to allowing for some spread in p. That’s where the uncertainty principle comes in – which I actually derived from a theoretical wave function in my post on Fourier transforms and conjugate variables. But so that’s not something I want to dwell on here.
We’re interested in the ω’s. What about them? Well… ω can take any value really – from a theoretical point of view that is. Now you’ll surely object to that from a practical point of view, because you know what it implies: different velocities of the component waves. But you can’t object in a theoretical analysis like this. The only thing we could possibly impose as a constraint is that our wave packet should not dissipate – so we don’t want it to delocalize and/or vanish after a while because we’re talking about some real-life electron here, and so that’s a particle which just doesn’t vanish like that.
To impose that condition, we need to look at the so-called dispersion relation. We know that we’ll have a whole range of wave numbers k, but so what values should ω take for a wave function to be ‘well-behaved’, i.e. not disperse in our case? Let’s first accept that k is some variable, the independent variable actually, and so then we associate some ω with each of these values k. So ω becomes the dependent variable (dependent on k that is) and that amounts to saying that we have some function ω = ω(k).
What kind of function? Well… It’s called the dispersion relation – for rather obvious reasons: because this function determines how the wave packet will behave: non-dispersive or – what we don’t want here – dispersive. Indeed, there are several possibilities:
1. The speed of all component waves is the same: that means that the ratio ω/k = v is the same for all component waves. Now that’s the case only if ω is directly proportional to k, with the factor of proportionality equal to v. That means that we have a very simple dispersion relation: ω = αk with α some constant equal to the velocity of the component waves as well as the group and phase velocity of the composite wave. So all velocities are just the same (v = vp = vg = α) and we’re in the first of the three cases explained at the beginning of this section.
2. There is a linear relation between ω and k but no direct proportionality, so we write ω = αk + β, in which β can be anything but not some function of k. So we allow different wave speeds for the component waves. The phase velocity will, once again, be equal to the ratio of the angular frequency and the wave number of the composite wave (whatever that is), but what about the group velocity, i.e. the velocity of our electron in this example? Well… One can show – but I will not do it here because it is quite a bit of work – that the group velocity of the wave packet will be equal to vg = dω/dk, i.e. the (first-order) derivative of ω with respect to k. So, if we want that wave packet to travel at the same speed as our electron (which is what we want of course because, otherwise, the wave packet would obviously not represent our electron), we’ll have to impose that dω/dk (or ∂ω/∂k if you would want to introduce more independent variables) equals v. In short, we have the condition that dω/dk = d(αk + β)/dk = α = v. [The little symbolic check right after this list spells this out.]
3. If the relation between ω and k is non-linear, well… Then we have none of the above. Hence, we then have a wave packet that gets distorted and stretched out and actually vanishes after a while. That case surely does not represent an electron.
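Here is the symbolic check promised in point 2 – a sketch, nothing more: with ω = αk + β, the group velocity dω/dk is the constant α, while the phase velocity ω/k still depends on k, so the component waves travel at different speeds even though the packet as a whole does not disperse.

```python
import sympy as sp

k, a, b = sp.symbols('k a b', positive=True)
omega = a * k + b

print(sp.diff(omega, k))        # group velocity d(omega)/dk -> a
print(sp.expand(omega / k))     # phase velocity omega/k     -> a + b/k
```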
Back to the de Broglie wave relations
Indeed, it’s now time to go back to our de Broglie relations – E = hf and p = h/λ and the question that sparked the presentation above: what formula to use for E? Indeed, for p it’s easy: we use p = mv and, if you want to include the case of relativistic speeds, you will write that formula in a more sophisticated way by making it explicit that the mass m is the relativistic mass m = γm0: the rest mass multiplied with a factor referred to as the Lorentz factor which, I am sure, you have seen before: γ = (1 – v2/c2)–1/2. At relativistic speeds (i.e. speeds close to c), this factor makes a difference: it adds mass to the rest mass. So the mass of a particle can be written as m = γm0, with m0 denoting the rest mass. At low speeds (e.g. 1% of the speed of light – as in the case of our electron), m will hardly differ from m0 and then we don’t need this Lorentz factor. It only comes into play at higher speeds.
At this point, I just can’t resist a small digression. It’s just to show that it’s not ‘relativistic effects’ that cause us trouble in finding the right energy equation for our E = hf relation. What’s kinetic energy? Well… There’s a few definitions – such as the energy gathered through converting potential energy – but one very useful definition in the current context is the following: kinetic energy is the excess energy of a particle over its rest mass energy. So when we’re looking at high-speed or high-energy particles, we will write the kinetic energy as K.E. = mc2 – m0c2 = (m – m0)c2 = γm0c2 – m0c2 = m0c2(γ – 1). Before you think I am trying to cheat you: where is the v of our particle? [To make it specific: think about our electron once again but not moving at leisure this time around: imagine it’s speeding at a velocity very close to c in some particle accelerator. Now, v is close to c but not equal to c and so it should not disappear.]
It’s in the Lorentz factor γ = (1 – v2/c2)–1/2.
Now, we can expand γ into a binomial series (it’s basically an application of the Taylor series – but just check it online if you’re in doubt), so we can write γ as an infinite sum of the following terms: γ = 1 + (1/2)·v2/c2 + (3/8)·v4/c4 + (5/16)·v6/c6 + … etcetera. [The binomial series is an infinite Taylor series, so it’s not to be confused with the (finite) binomial expansion.] Now, when we plug this back into our (relativistic) kinetic energy equation, we can scrap a few things (just do it) to get where I want to get:
K.E. = (1/2)·m0v2 + (3/8)·m0v4/c2 + (5/16)·m0v6/c4 + … etcetera
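If you don’t feel like doing the algebra, here is a small numerical sketch (mine, just for checking) that compares the exact relativistic kinetic energy m0c2(γ – 1) with the first one and two terms of this series – at our electron’s leisurely 2.2×106 m/s and at a more relativistic 0.9c:

```python
from math import sqrt
from scipy.constants import m_e, c

for v in (2.2e6, 0.9 * c):
    gamma = 1 / sqrt(1 - v**2 / c**2)
    exact = m_e * c**2 * (gamma - 1)          # exact relativistic K.E.
    term1 = 0.5 * m_e * v**2                  # first term of the series
    term2 = term1 + (3 / 8) * m_e * v**4 / c**2  # first two terms
    print(f"v = {v:.2e} m/s: exact {exact:.4e} J, "
          f"1 term {term1:.4e} J, 2 terms {term2:.4e} J")
```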
So what? Well… That’s it – for the digression at least: see how our non-relativistic formula for kinetic energy (K.E. = m0v2/2) is only the first term of this series and, hence, just an approximation: at low speeds, the second, third etcetera terms represent close to nothing (and even closer to nothing as you check out the fourth, fifth etcetera terms). OK, OK… You’re getting tired of these games. So what? Should we use this relativistic kinetic energy formula in the de Broglie relation?
No. As mentioned above already, we don’t need any relativistic correction, but the relativistic formula above does come in handy to understand the next bit. What’s the next bit about?
Well… It turns out that we actually do have to use the total energy – including (the energy equivalent to) the rest mass of our electron – in the de Broglie relation E = hf.
If you think a few seconds about the math of this – so we’d use γm0c2 instead of (1/2)m0v2 (so we use the speed of light instead of the speed of our particle) – you’ll realize we’ll be getting some astronomical frequency (we got that already but so here we are talking some kind of truly fantastic frequency) and, hence, combining that with the wavelength we’d derive from the other de Broglie equation (p = h/λ) we’d surely get some kind of totally unreal speed. Whatever it is, it will surely have nothing to do with our electron, does it?
Let’s go through the math.
The wavelength is just the same as that one given by p = h/λ, so we have λ = 0.33 nanometer. Don’t worry about this. That’s what it is indeed. Check it out online: it’s about a thousand times smaller than the wavelength of (visible) light but that’s OK. We’re talking something real here. That’s why electron microscopes can ‘see’ stuff that light microscopes can’t: their resolution is about a thousand times higher indeed.
But so when we take the first equation once again (E = hf) and calculate the frequency from f = γm0c2/h, we get a frequency in the neighborhood of 12.34×1019 hertz. So that gives a velocity of v = fλ = 4.1×1010 meter per second (m/s). But… THAT’S MORE THAN A HUNDRED TIMES THE SPEED OF LIGHT. Surely, we must have got it wrong.
We don’t. The velocity we are calculating here is the phase velocity vp of our matter wave – and IT’S REAL! More in general, it’s easy to show that this phase velocity is equal to vp = fλ = E/p = (γm0c2/h)·(h/γm0v) = c2/v. Just fill in the values for c and v (3×108 and 2.2×106 respectively) and you will get the same answer.
But that’s not consistent with relativity, is it? It is: phase velocities can be (and, in fact, usually are – as evidenced by our real-life example) superluminal, as they say – i.e. much higher than the speed of light. However, because they carry no information – the wave packet shape is the ‘information’, i.e. the (approximate) location of our electron – such phase velocities do not conflict with relativity theory. It’s like amplitude modulation (AM radio waves): the modulation of the amplitude carries the signal, not the carrier wave.
The group velocity, on the other hand, can obviously not be faster than light and, in fact, should be equal to the speed of our particle (i.e. the electron). So how do we calculate that? We don’t have any formula ω(k) here, do we? No. But we don’t need one. Indeed, we can write:
vg = ∂ω/∂k = ∂(E/ħ)/∂(p/ħ) = ∂E/∂p
[Do you see why we prefer the ∂ symbol instead of the d symbol now? ω is a function of k but it’s – first and foremost – a function of E, so a partial derivative sign is quite appropriate.]
So what? Well… Now you can use either the relativistic or non-relativistic relation between E and p to get a value for ∂E/∂p. Let’s take the non-relativistic one first (E = p2/2m): ∂E/∂p = ∂(p2/2m)/∂p = p/m = v. So we get the velocity of our electron! Just like we wanted. 🙂 As for the relativistic formula (E = (p2c2 + m02c4)1/2), well… I’ll let you figure that one out yourself. [You can also find it online in case you’d be desperate.]
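In case you don’t want to figure it out by hand, here is a hedged symbolic sketch (using sympy) that differentiates both energy-momentum relations; for the relativistic one, substituting p = γm0v shows that ∂E/∂p is indeed just v.

```python
import sympy as sp

p, m, c, v = sp.symbols('p m c v', positive=True)

E_nonrel = p**2 / (2 * m)
print(sp.diff(E_nonrel, p))                    # -> p/m, i.e. the velocity

E_rel = sp.sqrt(p**2 * c**2 + m**2 * c**4)
vg = sp.diff(E_rel, p)                         # -> p*c**2/E
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
print(sp.simplify(vg.subs(p, gamma * m * v)))  # -> v, using p = gamma*m*v
```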
Wow! So there we are. That was quite something! I will let you digest this for now. It’s true I promised to ‘point out the differences between matter waves and light waves’ in my introduction but this post has been lengthy enough. I’ll save those ‘differences’ for the next post. In the meanwhile, I hope you enjoyed and – more importantly – understood this one. If you did, you’re a master! A real one! 🙂
A not so easy piece: introducing the wave equation (and the Schrödinger equation)
The title above refers to a previous post: An Easy Piece: Introducing the wave function.
Indeed, I may have been sloppy here and there – I hope not – and so that’s why it’s probably good to clarify that the wave function (usually represented as Ψ – the psi function) and the wave equation (Schrödinger’s equation, for example – but there are other types of wave equations as well) are two related but different concepts: wave equations are differential equations, and wave functions are their solutions.
Indeed, from a mathematical point of view, a differential equation (such as a wave equation) relates a function (such as a wave function) with its derivatives, and its solution is that function or – more generally – the set (or family) of functions that satisfies this equation.
The function can be real-valued or complex-valued, and it can be a function involving only one variable (such as y = y(x), for example) or more (such as u = u(x, t) for example). In the first case, it’s a so-called ordinary differential equation. In the second case, the equation is referred to as a partial differential equation, even if there’s nothing ‘partial’ about it: it’s as ‘complete’ as an ordinary differential equation (the name just refers to the presence of partial derivatives in the equation). Hence, in an ordinary differential equation, we will have terms involving dy/dx and/or d2y/dx2, i.e. the first and second derivative of y respectively (and/or higher-order derivatives, depending on the order of the differential equation), while in partial differential equations, we will see terms involving ∂u/∂t and/or ∂2u/∂x2 (and/or higher-order partial derivatives), with ∂ replacing d as a symbol for the derivative.
The independent variables could also be complex-valued but, in physics, they will usually be real variables (or scalars as real numbers are also being referred to – as opposed to vectors, which are nothing but two-, three- or more-dimensional numbers really). In physics, the independent variables will usually be x – or let’s use r = (x, y, z) for a change, i.e. the three-dimensional space vector – and the time variable t. An example is that wave function which we introduced in our ‘easy piece’.
Ψ(r, t) = Aei(p·r – Et)/ħ
[If you read the Easy Piece, then you might object that this is not quite what I wrote there, and you are right: I wrote Ψ(r, t) = Aei(p·r/ħ – ωt). However, here I am just introducing the other de Broglie relation (i.e. the one relating energy and frequency): E = hf = ħω and, hence, ω = E/ħ. Just re-arrange a bit and you’ll see it’s the same.]
From a physics point of view, a differential equation represents a system subject to constraints, such as the energy conservation law (the sum of the potential and kinetic energy remains constant), and Newton’s law of course: F = d(mv)/dt. A differential equation will usually also be given with one or more initial conditions, such as the value of the function at point t = 0, i.e. the initial value of the function. To use Wikipedia’s definition: “Differential equations arise whenever a relation involving some continuously varying quantities (modeled by functions) and their rates of change in space and/or time (expressed as derivatives) is known or postulated.”
That sounds a bit more complicated, perhaps, but it means the same: once you have a good mathematical model of a physical problem, you will often end up with a differential equation representing the system you’re looking at, and then you can do all kinds of things, such as analyzing whether or not the actual system is in an equilibrium and, if not, whether it will tend to equilibrium or, if not, what the equilibrium conditions would be. But here I’ll refer to my previous posts on the topic of differential equations, because I don’t want to get into these details – as I don’t need them here.
The one thing I do need to introduce is an operator referred to as the gradient (it’s also known as the del operator, but I don’t like that word because it does not convey what it is). The gradient – denoted by ∇ – is a shorthand for the partial derivatives of our function u or Ψ with respect to space, so we write:
∇ = (∂/∂x, ∂/∂y, ∂/∂z)
You should note that, in physics, we apply the gradient only to the spatial variables, not to time. For the derivative in regard to time, we just write ∂u/∂t or ∂Ψ/∂t.
Of course, an operator means nothing until you apply it to a (real- or complex-valued) function, such as our u(x, t) or our Ψ(r, t):
∇u = ∂u/∂x and ∇Ψ = (∂Ψ/∂x, ∂Ψ/∂y, ∂Ψ/∂z)
As you can see, the gradient operator returns a vector with three components if we apply it to a real- or complex-valued function of r, and so we can do all kinds of funny things with it combining it with the scalar or vector product, or with both. Here I need to remind you that, in a vector space, we can multiply vectors using either (i) the scalar product, aka the dot product (because of the dot in its notation: a•b) or (ii) the vector product, aka the cross product (yes, because of the cross in its notation: a×b).
So we can define a whole range of new operators using the gradient and these two products, such as the divergence and the curl of a vector field. For example, if E is the electric field vector (I am using an italic bold-type E so you should not confuse E with the energy E, which is a scalar quantity), then div E = ∇•E, and curl E =∇×E. Taking the divergence of a vector will yield some number (so that’s a scalar), while taking the curl will yield another vector.
I am mentioning these operators because you will often see them. A famous example is the set of equations known as Maxwell’s equations, which integrate all of the laws of electromagnetism and from which we can derive the electromagnetic wave equation:
(1) ∇•E = ρ/ε0 (Gauss’ law)
(2) ∇×E = –∂B/∂t (Faraday’s law)
(3) ∇•B = 0
(4) c2∇×B = j/ε0 + ∂E/∂t
I should not explain these but let me just remind you of the essentials:
1. The first equation (Gauss’ law) can be derived from the equations for Coulomb’s law and the force acting upon a charge q in an electromagnetic field: F = q(E + v×B), with B the magnetic field vector (F is also referred to as the Lorentz force: it’s the combined force on a charged particle caused by the electric and magnetic fields), v the velocity of the (moving) charge, ρ the charge density (so charge is thought of as being distributed in space, rather than being packed into points, and that’s OK because our scale is not the quantum-mechanical one here), and, finally, ε0 the electric constant (some 8.854×10−12 farads per meter).
2. The second equation (Faraday’s law) gives the electric field associated with a changing magnetic field.
3. The third equation basically states that there is no such thing as a magnetic charge: there are only electric charges.
4. Finally, in the last equation, we have a vector j representing the current density: indeed, remember that magnetism only appears when (electric) charges are moving, so if there’s an electric current. As for the equation itself, well… That’s a more complicated story so I will leave that for the post scriptum.
We can do many more things: we can also take the curl of the gradient of some scalar, or the divergence of the curl of some vector (both have the interesting property that they are zero), and there are many more possible combinations – some of them useful, others not so useful. However, this is not the place to introduce differential calculus of vector fields (because that’s what it is).
The only other thing I need to mention here is what happens when we apply this gradient operator twice. Then we have a new operator ∇•∇ = ∇2, which is referred to as the Laplacian. In fact, when we say ‘apply ∇ twice’, we are actually doing a dot product. Indeed, ∇ returns a vector, and so we are going to multiply this vector once again with a vector using the dot product rule: a•b = ∑aibi (so we multiply the individual vector components and then add them). In the case of our functions u and Ψ, we get:
∇•(∇u) = ∇•∇u = (∇•∇)u = ∇2u = ∂2u/∂x2

∇•(∇Ψ) = ∇2Ψ = ∂2Ψ/∂x2 + ∂2Ψ/∂y2 + ∂2Ψ/∂z2
Now, you may wonder what it means to take the derivative (or partial derivative) of a complex-valued function (which is what we are doing in the case of Ψ) but don’t worry about that: a complex-valued function of one or more real variables, such as our Ψ(x, t), can be decomposed as Ψ(x, t) = ΨRe(x, t) + iΨIm(x, t), with ΨRe and ΨIm two real-valued functions representing the real and imaginary part of Ψ(x, t) respectively. In addition, the rules for differentiating complex-valued functions are, to a large extent, the same as for real-valued functions. For example, if z is a complex number, then dez/dz = ez and, hence, using this and other very straightforward rules, we can indeed find the partial derivatives of a function such as Ψ(r, t) = Aei(p·r – Et)/ħ with respect to all the (real-valued) variables in the argument.
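To see those ‘straightforward rules’ in action, here is a one-dimensional sketch (one space dimension only, to keep it readable): differentiate Ψ = A·ei(px – Et)/ħ symbolically and look at the factors each derivative pulls out – these factors are exactly what wave equations exploit.

```python
import sympy as sp

x, t, A, p, E, hbar = sp.symbols('x t A p E hbar', positive=True)
Psi = A * sp.exp(sp.I * (p * x - E * t) / hbar)

print(sp.simplify(sp.diff(Psi, t) / Psi))      # -> -I*E/hbar
print(sp.simplify(sp.diff(Psi, x, 2) / Psi))   # -> -p**2/hbar**2
```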
The electromagnetic wave equation
OK. That’s enough math now. We are ready now to look at – and to understand – a real wave equation – I mean one that actually represents something in physics. Let’s take Maxwell’s equations as a start. To make it easy – and also to ensure that you have easy access to the full derivation – we’ll take the so-called Heaviside form of these equations:
Heaviside form of Maxwell's equations
This Heaviside form assumes a charge-free vacuum space, so there are no external forces acting upon our electromagnetic wave. There are also no other complications such as electric currents. Also, the c2 (i.e. the square of the speed of light) is written here as c2 = 1/με, with μ and ε the so-called permeability (μ) and permittivity (ε) respectively (c0, μ0 and ε0 are the values in a vacuum space: indeed, light travels slower elsewhere (e.g. in glass) – if at all).
Now, these four equations can be replaced by just two, and it’s these two equations that are referred to as the electromagnetic wave equation(s):
electromagnetic wave equation
The derivation is not difficult. In fact, it’s much easier than the derivation for the Schrödinger equation which I will present in a moment. But, even if it is very short, I will just refer to Wikipedia in case you would be interested in the details (see the article on the electromagnetic wave equation). The point here is just to illustrate what is being done with these wave equations and why – not so much how. Indeed, you may wonder what we have gained with this ‘reduction’.
The answer to this very legitimate question is easy: the two equations above are second-order partial differential equations which are relatively easy to solve. In other words, we can find a general solution, i.e. a set or family of functions that satisfy the equation and, hence, can represent the wave itself. Why a set of functions? If it’s a specific wave, then there should only be one wave function, right? Right. But to narrow our general solution down to a specific solution, we will need extra information, which are referred to as initial conditions, boundary conditions or, in general, constraints. [And if these constraints are not sufficiently specific, then we may still end up with a whole bunch of possibilities, even if they narrowed down the choice.]
Let’s give an example by re-writing the above wave equation for our function u(x, t) – so we’re looking at a plane wave traveling in one dimension only:
Wave equation for u
There are many functional forms for u that satisfy this equation. One of them is the following:
general solution for wave equation
This resembles the one I introduced when presenting the de Broglie equations, except that – this time around – we are talking a real electromagnetic wave, not some probability amplitude. Another difference is that we allow a composite wave with two components: one traveling in the positive x-direction, and one traveling in the negative x-direction. Now, if you read the post in which I introduced the de Broglie wave, you will remember that these Aei(kx–ωt) or Be–i(kx+ωt) waves give strange probabilities. However, because we are not looking at some probability amplitude here – so it’s not a de Broglie wave but a real wave (so we use complex number notation only because it’s convenient but, in practice, we’re only considering the real part), this functional form is quite OK.
That being said, the following functional form, representing a wave packet (aka a wave train), is also a solution (or, better, a set of solutions):
Wave packet equation
Huh? Well… Yes. If you really can’t follow here, I can only refer you to my post on Fourier analysis and Fourier transforms: I cannot reproduce that one here because that would make this post totally unreadable. We have a wave packet here, and so that’s the sum of an infinite number of component waves that interfere constructively in the region of the envelope (so that’s the location of the packet) and destructively outside. The integral is just the continuum limit of a summation of n such waves. So this integral will yield a function u with x and t as independent variables… If we know A(k) that is. Now that’s the beauty of these Fourier integrals (because that’s what this integral is).
Indeed, in my post on Fourier transforms I also explained how these amplitudes A(k) in the equation above can be expressed as a function of u(x, t) through the inverse Fourier transform. In fact, I actually presented the Fourier transform pair Ψ(x) and Φ(p) in that post, but the logic is the same – except that we’re inserting the time variable t once again (but with its value fixed at t = 0):
Fourier transform

OK, you’ll say, but where is all of this going? Be patient. We’re almost done. Let’s now introduce a specific initial condition. Let’s assume that we have the following functional form for u at time t = 0:
u at time 0
You’ll wonder where this comes from. Well… I don’t know. It’s just an example from Wikipedia. It’s random but it fits the bill: it’s a localized wave (so that’s a wave packet) because of the very particular form of the exponent (–x² + ik₀x). The point to note is that we can calculate A(k) when inserting this initial condition in the equation above, and then – finally, you’ll say – we also get a specific solution for our u(x, t) function by inserting the value for A(k) in our general solution. In short, we get:
u final form
As mentioned above, we are actually only interested in the real part of this equation (so that’s the exponential factor – note there is no i in its exponent, so it’s just some real number – multiplied with the cosine term).
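If you’d rather see the Fourier recipe in numbers than in integrals, here is a small numerical sketch (my own, with an arbitrary k₀ = 5, and with the 1/2π factor put in front of the A(k) integral): it starts from u(x, 0) = e^{–x² + ik₀x}, computes the amplitudes A(k), and then rebuilds u(x, 0) from them.

```python
import numpy as np

# Build u(x, 0) = exp(-x^2 + i*k0*x) on a grid.
k0 = 5.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
u0 = np.exp(-x**2 + 1j * k0 * x)

# A(k) = (1/2pi) * integral of u(x,0) * exp(-i*k*x) dx, discretized as a sum.
k = np.linspace(k0 - 15, k0 + 15, 1501)
dk = k[1] - k[0]
phases = np.exp(-1j * np.outer(k, x))                # shape (len(k), len(x))
A = (u0[None, :] * phases).sum(axis=1) * dx / (2 * np.pi)

# Rebuild u(x, 0) = integral of A(k) * exp(i*k*x) dk and compare.
u_rebuilt = (A[:, None] * np.conj(phases)).sum(axis=0) * dk
print(np.abs(u_rebuilt - u0).max())                  # ~0: the round trip recovers u0
```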
However, the example above shows how easy it is to extend the analysis to a complex-valued wave function, i.e. a wave function describing a probability amplitude. We will actually do that now for Schrödinger’s equation. [Note that the example comes from Wikipedia’s article on wave packets, and so there is a nice animation which shows how this wave packet (be it the real or imaginary part of it) travels through space. Do watch it!]
Schrödinger’s equation
Let me just write it down:
Schrodinger's equation
That’s it. This is the Schrödinger equation – in a somewhat simplified form but it’s OK.
[…] You’ll find the equation above either very simple or, else, very difficult, depending on whether you understood most of what I wrote before it, or nothing at all. If you understood something, then it should be fairly simple, because it hardly differs from the other wave equation.
Indeed, we have that imaginary unit (i) in front of the left term, but then you should not panic over that: when everything is said and done, we are working here with the derivative (or partial derivative) of a complex-valued function, and so it should not surprise us that we have an i here and there. It’s nothing special. In fact, we had them in the equation above too, but they just weren’t explicit. The second difference with the electromagnetic wave equation is that we have a first-order derivative with respect to time only (in the electromagnetic wave equation we had ∂²u/∂t², so that’s a second-order derivative). Finally, we have a –1/2 factor in front of the right-hand term, instead of c². OK, so what? It’s a different thing – but that should not surprise us: when everything is said and done, it is a different wave equation because it describes something else (not an electromagnetic wave but a quantum-mechanical system).
To understand why it’s different, I’d need to give you the equivalent of Maxwell’s set of equations for quantum mechanics, and then show how this wave equation is derived from them. I could do that. The derivation is somewhat lengthier than for our electromagnetic wave equation but not all that much. The problem is that it involves some new concepts which we haven’t introduced as yet – mainly some new operators. But then we have introduced a lot of new operators already (such as the gradient and the curl and the divergence) so you might be ready for this. Well… Maybe. The treatment is a bit lengthy, and so I’d rather do it in a separate post. Why? […] OK. Let me say a few things about it then. Here we go:
• These new operators involve matrix algebra. Fine, you’ll say. Let’s get on with it. Well… It’s matrix algebra with matrices with complex elements, so if we write an n×m matrix A as A = (a_ij), then the elements a_ij (i = 1, 2,… n and j = 1, 2,… m) will be complex numbers.
• That allows us to define Hermitian matrices: a Hermitian matrix is a square matrix A which is the same as the complex conjugate of its transpose.
• We can use such matrices as operators indeed: transformations acting on a column vector X to produce another column vector AX.
• Now, you’ll remember – from your course on matrix algebra with real (as opposed to complex) matrices, I hope – that we have this very particular matrix equation AX = λX which has non-trivial solutions (i.e. solutions X ≠ 0) if and only if the determinant of A-λI is equal to zero. This condition (det(A-λI) = 0) is referred to as the characteristic equation.
• This characteristic equation is a polynomial of degree n in λ and its roots are called eigenvalues or characteristic values of the matrix A. The non-trivial solutions X ≠ 0 corresponding to each eigenvalue are called eigenvectors or characteristic vectors.
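To make this less abstract, here is a tiny numerical illustration (my own, with a made-up 2×2 matrix): a Hermitian matrix has real eigenvalues, which is exactly what we need if those eigenvalues are to play the role of energies.

```python
import numpy as np

# A 2x2 matrix that equals the complex conjugate of its transpose.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)              # so A is Hermitian

eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)                             # [1. 4.]: real, as promised

# Check the defining equation A X = lambda X for the first eigenpair.
X = eigenvectors[:, 0]
print(np.allclose(A @ X, eigenvalues[0] * X))  # True
```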
Now – just in case you’re still with me – it’s quite simple: in quantum mechanics, we have the so-called Hamiltonian operator. The Hamiltonian in classical mechanics represents the total energy of the system: H = T + V (total energy H = kinetic energy T + potential energy V). Here we have got something similar but different. 🙂 The Hamiltonian operator is written as H-hat, i.e. an H with an accent circonflexe (as they say in French). Now, we need to let this Hamiltonian operator act on the wave function Ψ and if the result is proportional to the same wave function Ψ, then Ψ is a so-called stationary state, and the proportionality constant will be equal to the energy E of the state Ψ. These stationary states correspond to standing waves, or ‘orbitals’, such as in atomic orbitals or molecular orbitals. So we have:
E\Psi=\hat H \Psi
I am sure you are no longer there but, in fact, that’s it. We’re done with the derivation. The equation above is the so-called time-independent Schrödinger equation. It’s called that not so much because the wave function is time-independent (it is), but because the Hamiltonian operator is time-independent: that obviously makes sense because stationary states are associated with specific energy levels indeed. However, if we do allow the energy level to vary in time (which we should do – if only because of the uncertainty principle: there is no such thing as a fixed energy level in quantum mechanics), then we cannot use some constant for E, but we need a so-called energy operator. Fortunately, this energy operator has a remarkably simple functional form:
\hat{E} \Psi = i\hbar\dfrac{\partial}{\partial t}\Psi = E\Psi

Now if we plug that into the equation above, we get our time-dependent Schrödinger equation:
OK. You probably did not understand one iota of this but, even then, you will object that this does not resemble the equation I wrote at the very beginning: i(∂u/∂t) = (–1/2)∇²u.
You’re right, but we only need one more step for that. If we leave out potential energy (so we assume a particle moving in free space), then the Hamiltonian can be written as:
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2
You’ll ask me how this is done but I will be short on that: the relationship between energy and momentum is being used here (and so that’s where the 2m factor in the denominator comes from). However, I won’t say more about it because this post would become way too lengthy if I would include each and every derivation and, remember, I just want to get to the result because the derivations here are not the point: I want you to understand the functional form of the wave equation only. So, using the above identity and, OK, let’s be somewhat more complete and include potential energy once again, we can write the time-dependent wave equation as:
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r},t)\Psi(\mathbf{r},t)
Now, how is the equation above related to i(∂u/∂t) = (–1/2)∇²u? It’s a very simplified version of it: potential energy is, once again, assumed to be not relevant (so we’re talking a free particle again, with no external forces acting on it) but the real simplification is that we give m and ħ the value 1, so m = ħ = 1. Why?
Well… My initial idea was to do something similar as I did above and, hence, actually use a specific example with an actual functional form, just like we did for the real-valued u(x, t) function. However, when I look at how long this post has become already, I realize I should not do that. In fact, I would just copy an example from somewhere else – probably Wikipedia once again, if only because their examples are usually nicely illustrated with graphs (and often animated graphs). So let me just refer you here to the other example given in the Wikipedia article on wave packets: that example uses that simplified i(∂u/∂t) = (–1/2)∇²u equation indeed. It actually uses the same initial condition:
u at time 0
However, because the wave equation is different, the wave packet behaves differently. It’s a so-called dispersive wave packet: it delocalizes. Its width increases over time and so, after a while, it just vanishes because it diffuses all over space. So there’s a solution to the wave equation, given this initial condition, but it’s just not stable – as a description of some particle that is (from a mathematical point of view – or even a physical point of view – there is no issue).
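If you want to see that dispersion with your own eyes, here is a short numerical sketch (my own, with m = ħ = 1 and the same made-up k₀ = 5 as before): it evolves the packet by multiplying each Fourier component by e^{–ik²t/2}, which is what the simplified equation i(∂u/∂t) = (–1/2)∇²u prescribes in one dimension, and prints the packet’s growing width.

```python
import numpy as np

k0 = 5.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
u0 = np.exp(-x**2 + 1j * k0 * x)

# Angular wave numbers matching numpy's FFT ordering.
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def width(u):
    """Standard deviation of the position, weighted by |u|^2."""
    P = np.abs(u)**2
    P = P / P.sum()
    mean = (x * P).sum()
    return np.sqrt((((x - mean)**2) * P).sum())

# Each Fourier mode picks up a phase exp(-i*w*t) with w = k^2/2.
for t in [0.0, 1.0, 2.0, 4.0]:
    u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * k**2 * t / 2))
    print(t, width(u_t))   # the width grows: the packet delocalizes
```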
In any case, this probably all sounds like Chinese – or Greek if you understand Chinese :-). I actually haven’t worked with these Hermitian operators yet, and so it’s pretty shaky territory for me. However, I felt like I had picked up enough math and physics on this long and winding Road to Reality (I don’t think I am even halfway) to give it a try. I hope I succeeded in passing the message, which I’ll summarize as follows:
1. Schrödinger’s equation is just like any other differential equation used in physics, in the sense that it represents a system subject to constraints, such as the relationship between energy and momentum.
2. It will have many general solutions. In other words, the wave function – which describes a probability amplitude as a function in space and time – will have many general solutions, and a specific solution will depend on the initial conditions.
3. The solution(s) can represent stationary states, but not necessarily so: a wave (or a wave packet) can be non-dispersive or dispersive. However, when we plug the wave function into the wave equation, it will satisfy that equation.
That’s neither spectacular nor difficult, is it? But, perhaps, it helps you to ‘understand’ wave equations, including the Schrödinger equation. But what is understanding? Dirac once famously said: “I consider that I understand an equation when I can predict the properties of its solutions, without actually solving it.”
Hmm… I am not quite there yet, but I am sure some more practice with it will help. 🙂
Post scriptum: On Maxwell’s equations
First, we should say something more about these two other operators which I introduced above: the divergence and the curl. First on the divergence.
The divergence of a field vector E (or B) at some point r represents the so-called flux of E, i.e. the ‘flow’ of E per unit volume. So flux and divergence both deal with the ‘flow’ of electric field lines away from (positive) charges. [The ‘away from’ is from positive charges indeed – as per the convention: Maxwell himself used the term ‘convergence’ to describe flow towards negative charges, but so his ‘convention’ did not survive. Too bad, because I think convergence would be much easier to remember.]
So if we write that ∇•E = ρ/ε₀, then it means that we have some constant flux of E because of some (fixed) distribution of charges.
Now, we already mentioned that equation (2) in Maxwell’s set meant that there is no such thing as a ‘magnetic’ charge: indeed, ∇•B = 0 means there is no magnetic flux. But, of course, magnetic fields do exist, don’t they? They do. A current in a wire, for example, i.e. a bunch of steadily moving electric charges, will induce a magnetic field according to Ampère’s law, which is part of equation (4) in Maxwell’s set: c²∇×B = j/ε₀, with j representing the current density and ε₀ the electric constant.
Now, at this point, we have this curl: ∇×B. Just like divergence (or convergence as Maxwell called it – but then with the sign reversed), curl also means something in physics: it’s the amount of ‘rotation’, or ‘circulation’ as Feynman calls it, around some loop.
So, to summarize the above, we have (1) flux (divergence) and (2) circulation (curl) and, of course, the two must be related. And, while we do not have any magnetic charges and, hence, no flux for B, the current in that wire will cause some circulation of B, and so we do have a magnetic field. However, that magnetic field will be static, i.e. it will not change. Hence, the time derivative ∂B/∂t will be zero and, hence, from equation (3) we get that ∇×E = 0, so our electric field will be static too. The time derivative ∂E/∂t which appears in equation (4) also disappears and we just have c²∇×B = j/ε₀. This situation – of a constant electric and magnetic field – is described as electrostatics and magnetostatics respectively. It implies a neat separation of the four equations, and it makes electricity and magnetism appear as distinct phenomena. Indeed, as long as charges and currents are static, we have:
[I] Electrostatics: (1) ∇•E = ρ/ε₀ and (2) ∇×E = 0
[II] Magnetostatics: (3) c²∇×B = j/ε₀ and (4) ∇•B = 0
The first two equations describe a vector field with zero curl and a given divergence (i.e. the electric field) while the third and fourth equations describe a seemingly separate vector field with a given curl but zero divergence (i.e. the magnetic field). Now, I am not writing this post scriptum to reproduce Feynman’s Lectures on Electromagnetism, and so I won’t say much more about this. I just want to note two points:
1. The first point to note is that factor c² in the c²∇×B = j/ε₀ equation. That’s something which you don’t have in the ∇•E = ρ/ε₀ equation. Of course, you’ll say: so what? Well… It’s weird. And if you bring it to the other side of the equation, it becomes clear that you need an awful lot of current for a tiny little bit of magnetic circulation (because you’re dividing by c², so that’s a factor 9 with 16 zeroes after it (9×10¹⁶): an awfully big number in other words). Truth be said, it reveals something very deep. Hmm? Take a wild guess. […] Relativity perhaps? Well… Yes!
It’s obvious that we buried v somewhere in this equation, the velocity of the moving charges. But then it’s part of j of course: the rate at which charge flows through a unit area per second. But – Hey! – velocity as compared to what? What’s the frame of reference? The frame of reference is us obviously or – somewhat less subjective – the stationary charges determining the electric field according to equation (1) in the set above: ∇•E = ρ/ε0. But so here we can ask the same question: stationary in what reference frame? As compared to the moving charges? Hmm… But so how does it work with relativity? I won’t copy Feynman’s 13th Lecture here, but so, in that lecture, he analyzes what happens to the electric and magnetic force when we look at the scene from another coordinate system – let’s say one that moves parallel to the wire at the same speed as the moving electrons, so – because of our new reference frame – the ‘moving electrons’ now appear to have no speed at all but, of course, our stationary charges will now seem to move.
What Feynman finds – and his calculations are very easy and straightforward – is that, while we will obviously insert different input values into Maxwell’s set of equations and, hence, get different values for the E and B fields, the actual physical effect – i.e. the final Lorentz force on a (charged) particle – will be the same. To be very specific, in a coordinate system at rest with respect to the wire (so we see charges move in the wire), we find a ‘magnetic’ force indeed, but in a coordinate system moving at the same speed of those charges, we will find an ‘electric’ force only. And from yet another reference frame, we will find a mixture of E and B fields. However, the physical result is the same: there is only one combined force in the end – the Lorentz force F = q(E + v×B) – and it’s always the same, regardless of the reference frame (inertial or moving at whatever speed – relativistic (i.e. close to c) or not).
In other words, Maxwell’s description of electromagnetism is invariant or, to say exactly the same in yet other words, electricity and magnetism taken together are consistent with relativity: they are part of one physical phenomenon: the electromagnetic interaction between (charged) particles. So electric and magnetic fields appear in different ‘mixtures’ if we change our frame of reference, and so that’s why magnetism is often described as a ‘relativistic’ effect – although that’s not very accurate. However, it does explain that c² factor in the equation for the curl of B. [How exactly? Well… If you’re smart enough to ask that kind of question, you will be smart enough to find the derivation on the Web. :-)]
Note: Don’t think we’re talking astronomical speeds here when comparing the two reference frames. It would also work for astronomical speeds but, in this case, we are talking the speed of the electrons moving through a wire. Now, the so-called drift velocity of electrons – which is the one we have to use here – in a copper wire of radius 1 mm carrying a steady current of 3 Amps is only about 1 m per hour! So the relativistic effect is tiny – but still measurable !
2. The second thing I want to note is that Maxwell’s set of equations with non-zero time derivatives for E and B clearly show that it’s changing electric and magnetic fields that sort of create each other, and it’s this that’s behind electromagnetic waves moving through space without losing energy. They just travel on and on. The math behind this is beautiful (and the animations in the related Wikipedia articles are equally beautiful – and probably easier to understand than the equations), but that’s stuff for another post. As the electric field changes, it induces a magnetic field, which then induces a new electric field, etc., allowing the wave to propagate itself through space. I should also note here that the energy is in the field and so, when electromagnetic waves, such as light, or radiowaves, travel through space, they carry their energy with them.
Let me be fully complete here, and note that there’s energy in electrostatic fields as well, and the formula for it is remarkably beautiful. The total (electrostatic) energy U in an electrostatic field generated by charges located within some finite distance is equal to:
Energy of electrostatic field
This equation introduces the electrostatic potential. This is a scalar field Φ from which we can derive the electric field vector just by applying the gradient operator (with a minus sign: E = –∇Φ). In fact, all curl-free fields (such as the electric field in this case) can be written as the gradient of some scalar field. That’s a universal truth. See how beautiful math is? 🙂
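Here is a one-minute symbolic check of that universal truth (my own sketch, with an arbitrary, made-up potential Φ): whatever scalar field you start from, the curl of its gradient vanishes.

```python
from sympy import sin, exp
from sympy.vector import CoordSys3D, gradient, curl

R = CoordSys3D('R')

# An arbitrary, made-up scalar potential Phi(x, y, z).
Phi = R.x**2 * R.y + sin(R.z) * exp(R.x)

E = -gradient(Phi)   # the electric field derived from the potential
print(curl(E))       # 0: the field is curl-free, whatever Phi we pick
```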
End of the Road to Reality?
Or the end of theoretical physics?
In my previous post, I mentioned the Goliath of science and engineering: the Large Hadron Collider (LHC), built by the European Organization for Nuclear Research (CERN) under the Franco-Swiss border near Geneva. I actually started uploading some pictures, but then I realized I should write a separate post about it. So here we go.
The first image (see below) shows the LHC tunnel, while the other shows (a part of) one of the two large general-purpose particle detectors that are part of this Large Hadron Collider. A detector is the thing that’s used to look at those collisions. This is actually the smallest of the two general-purpose detectors: it’s the so-called CMS detector (the other one is the ATLAS detector), and it’s ‘only’ 21.6 meter long and 15 meter in diameter – and it weighs about 12,500 tons. But so it did detect a Higgs particle – just like the ATLAS detector. [That’s actually not 100% sure but it was sure enough for the Nobel Prize committee – so I guess that should be good enough for us common mortals :-)]
LHC tunnelLHC - CMS detector
image of collision
The picture above shows one of these collisions in the CMS detector. It’s not the one with the trace of the Higgs particle though. In fact, I have not found any image that actually shows the Higgs particle: the closest thing to such an image are some impressionistic renderings on the ATLAS site.
In case you wonder what’s being scattered here… Well… All kinds of things – but the original collision is usually between protons (so these are hydrogen nuclei, i.e. H⁺ ions), although the LHC can produce other nucleon beams as well (collectively referred to as hadrons). These protons have energy levels of 4 TeV (tera-electronvolt: 1 TeV = 1000 GeV = 1 trillion eV = 1×10¹² eV).
Now, let’s think about scale once again. Remember (from that same previous post) that we calculated a wavelength of 0.33 nanometer (1 nm = 1×10–9 m, so that’s a billionth of a meter) for an electron. Well, this LHC is actually exploring the sub-femtometer (fm) frontier. One femtometer (fm) is 1×10–15 m so that’s another million times smaller. Yes: so we are talking a millionth of a billionth of a meter. The size of a proton is an estimated 1.7 femtometer indeed and, as you surely know, a proton is a point-like thing occupying a very tiny space, so it’s not like an electron ‘cloud’ swirling around: it’s much smaller. In fact, quarks – three of them make up a proton (or a neutron) – are usually thought of as being just a little bit less than half that size – so that’s about 0.7 fm.
It may also help you to use the value I mentioned for high-energy electrons when I was discussing the LEP (the Large Electron-Positron Collider, which preceded the LHC) – so that was 104.5 GeV – and calculate the associated de Broglie wavelength using E = hf and λ = v/f. The velocity is close to c and, hence, if we plug everything in, we get a value close to 1.2×10⁻¹⁷ m, so we’re well into the femtometer scale and below. [If you don’t want to calculate anything, then just note we’re going from eV to giga-eV energy levels here, and so our wavelength shrinks accordingly. Also remember (from the previous posts) that we calculated a wavelength of 0.33×10⁻⁹ m and an associated energy of about 70 eV for a slow-moving electron – i.e. one going at 2,200 km per second ‘only’, i.e. less than 1% of the speed of light.] Also note that, at these energy levels, it doesn’t matter whether or not we include the rest mass of the electron: 0.511 MeV is nothing as compared to the GeV realm. In short, we are talking very, very tiny stuff here.
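If you want to redo such back-of-the-envelope calculations yourself, here is a tiny helper (my own sketch): for ultra-relativistic particles the rest mass hardly matters, so pc ≈ E and the wavelength is roughly λ = hc/E.

```python
H_TIMES_C = 1.23984e-6   # Planck's constant times c, in eV*m

def wavelength_ultrarelativistic(energy_eV):
    """Rough wavelength estimate for E >> rest mass, where p*c ~ E."""
    return H_TIMES_C / energy_eV

print(wavelength_ultrarelativistic(104.5e9))  # LEP electrons: ~1.2e-17 m
print(wavelength_ultrarelativistic(4e12))     # LHC protons:   ~3.1e-19 m
```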
But so that’s the LEP scale. I wrote that the LHC is probing things at the sub-femtometer scale. So how much sub-something is that? Well… Quite a lot: the LHC is looking at stuff at a scale that’s more than a thousand times smaller. Indeed, if collision experiments in the giga-electronvolt (GeV) energy range correspond to probing stuff at the femtometer scale, then tera-electronvolt (TeV) energy levels correspond to probing stuff that’s, once again, another thousand times smaller, so we’re looking at distances of less than a thousandth of a millionth of a billionth of a meter. Now, you can try to ‘imagine’ that, but you can’t really.
So what do we actually ‘see’ then? Well… Nothing much, one could say: all we can ‘see’ are traces of point-like ‘things’ being scattered, which then disintegrate or just vanish from the scene – as shown in the image above. In fact, as mentioned above, we do not even have such a clear-cut ‘trace’ of a Higgs particle: we’ve got a ‘kinda signal’ only. So that’s it? Yes. But then these images are beautiful, aren’t they? If only to remind ourselves that particle physics is about more than just a bunch of formulas. It’s about… Well… The essence of reality: its intrinsic nature so to say. So… Well…
Let me be skeptical. So we know all of that now, don’t we? The so-called Standard Model has been confirmed by experiment. We now know how Nature works, don’t we? We observe light (or, to be precise, radiation: most notably that cosmic background radiation that reaches us from everywhere) that originated nearly 14 billion years ago (to be precise: 380,000 years after the Big Bang – but what’s 380,000 years on this scale?) and so we can ‘see’ things that are 14 billion light-years away. In fact, things that were 14 billion light-years away: indeed, because of the expansion of the universe, they are further away now and so that’s why the so-called observable universe is actually larger. So we can ‘see’ everything we need to ‘see’ at the cosmic distance scale and now we can also ‘see’ all of the particles that make up matter, i.e. quarks and electrons mainly (we also have some other so-called leptons, like neutrinos and muons), and also all of the particles that make up anti-matter of course (i.e. antiquarks, positrons etcetera). As importantly – or even more – we can also ‘see’ all of the ‘particles’ carrying the forces governing the interactions between the ‘matter particles’ – which are collectively referred to as fermions, as opposed to the ‘force carrying’ particles, which are collectively referred to as bosons (see my previous post on Bose and Fermi). Let me quickly list them – just to make sure we’re on the same page:
1. Photons for the electromagnetic force.
2. Gluons for the so-called strong force, which explains why positively charged protons ‘stick’ together in nuclei – in spite of their electric charge, which should push them away from each other. [You might think it’s the neutrons that ‘glue’ them together but so, no, it’s the gluons.]
3. W+, W–, and Z bosons for the so-called ‘weak’ interactions (aka Fermi’s interaction), which explain how one type of quark can change into another, thereby explaining phenomena such as beta decay. [For example, carbon-14 will – through beta decay – spontaneously decay into nitrogen-14. Indeed, carbon-12 is the stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years ‘only’ 🙂 and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). As for the name, a beta particle can refer to an electron or a positron, so we can have β– decay (e.g. the above-mentioned carbon-14 decay) as well as β+ decay (e.g. magnesium-23 into sodium-23). There’s also alpha and gamma decay but that involves different things. In any case… Let me end this digression within the digression.]
4. Finally, the existence of the Higgs particle – and, hence, of the associated Higgs field – had been predicted since 1964 already, but it was only experimentally confirmed (i.e. we saw it, in the LHC) last year, so Peter Higgs – and a few others of course – got their well-deserved Nobel Prize only 50 years later. The Higgs field gives fermions, and also the W+, W–, and Z bosons, their mass (but not photons and gluons, and so that’s why the weak force has such short range – as compared to the electromagnetic force).
So there we are. We know it all. Sort of. Of course, there are many questions left – so it is said. For example, the Higgs particle does actually not explain the gravitational force, so it’s not the (theoretical) graviton, and so we do not have a quantum field theory for the gravitational force. [Just Google it and you’ll see why: there’s theoretical as well as practical (experimental) reasons for that.] Secondly, while we do have a quantum field theory for all of the forces (or ‘interactions’ as physicists prefer to call them), there are a lot of constants in them (much more than just that Planck constant I introduced in my posts!) that seem to be ‘unrelated and arbitrary.’ I am obviously just quoting Wikipedia here – but it’s true.
Just look at it: three ‘generations’ of matter with various strange properties, four force fields (and some ‘gauge theory’ to provide some uniformity), bosons that have mass (the W+, W, and Z bosons, and then the Higgs particle itself) but then photons and gluons don’t… It just doesn’t look good, and then Feynman himself wrote, just a few years before his death (QED, 1985, p. 128), that the math behind calculating some of these constants (the coupling constant j for instance, or the rest mass n of an electron), which he actually invented (it makes use of a mathematical approximation method called perturbation theory) and for which he got a Nobel Prize, is a “dippy process” and that “having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent“. He adds: “It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization [“the shell game that we play to find n and j” as he calls it] is not mathematically legitimate.” And so he writes this about quantum electrodynamics, not about “the rest of physics” (and so that’s quantum chromodynamics (QCD) – the theory of the strong interactions – and quantum flavordynamics (QFD) – the theory of weak interactions) which, he adds, “has not been checked anywhere near as well as electrodynamics.”
Wow! That’s a pretty damning statement, isn’t it? In short, all of the celebrations around the experimental confirmation of the Higgs particle cannot hide the fact that it all looks a bit messy. There are other questions as well – most of which I don’t understand so I won’t mention them. To make a long story short, physicists and mathematicians alike seem to think there must be some ‘more fundamental’ theory behind it. But – Hey! – you can’t have it all, can you? And, of course, all these theoretical physicists and mathematicians out there do need to justify their academic budget, don’t they? And so all that talk about a Grand Unification Theory (GUT) is probably just what it is: talk. Isn’t it? Maybe.
The key question is probably easy to formulate: what’s beyond this scale of a thousandth of a proton diameter (0.001×10–15 m) – a thousandth of a millionth of a billionth of a meter that is. Well… Let’s first note that this so-called ‘beyond’ is a ‘universe’ which mankind (or let’s just say ‘we’) will never see. Never ever. Why? Because there is no way to go substantially beyond the 4 TeV energy levels that were reached last year – at great cost – in the world’s largest particle collider (the LHC). Indeed, the LHC is widely regarded not only as “the most complex and ambitious scientific project ever accomplished by humanity” (I am quoting a CERN scientist here) but – with a cost of more than 7.5 billion Euro – also as one of the most expensive ones. Indeed, taking into account inflation and all that, it was like the Manhattan project indeed (although scientists loathe that comparison). So we should not have any illusions: there will be no new super-duper LHC any time soon, and surely not during our lifetime: the current LHC is the super-duper thing!
Indeed, when I write ‘substantially‘ above, I really mean substantially. Just to put things in perspective: the LHC is currently being upgraded to produce 7 TeV beams (it was shut down for this upgrade, and it should come back on stream in 2015). That sounds like an awful lot (from 4 to 7 is +75%), and it is: it amounts to packing the kinetic energy of seven flying mosquitos (instead of four previously :-)) into each and every particle that makes up the beam. But that’s not substantial, in the sense that it is very much below the so-called GUT energy scale, which is the energy level above which, it is believed (by all those GUT theorists at least), the electromagnetic force, the weak force and the strong force will all be part and parcel of one and the same unified force. Don’t ask me why (I’ll know when I’ve finished reading Penrose, I hope) but that’s what it is (if I should believe what I am reading currently that is). In any case, the thing to remember is that the GUT energy levels are in the 10¹⁶ GeV range, so that’s – sorry for all these numbers – ten trillion TeV. That amounts to pumping some 1.6 million Joule into each of those tiny point-like particles that make up our beam. So… No. Don’t even try to dream about it. It won’t happen. That’s science fiction – with the emphasis on fiction. [Also don’t dream about ten trillion flying mosquitos packed into one proton-sized super-mosquito either. :-)]
So what?
Well… I don’t know. Physicists refer to the zone beyond the above-mentioned scale (so things smaller than 0.001×10–15 m) as the Great Desert. That’s a very appropriate name I think – for more than one reason. And so it’s this ‘desert’ that Roger Penrose is actually trying to explore in his ‘Road to Reality’. As for me, well… I must admit I have great trouble following Penrose on this road. I’ve actually started to doubt that Penrose’s Road leads to Reality. Maybe it takes us away from it. Huh? Well… I mean… Perhaps the road just stops at that 0.001×10–15 m frontier?
In fact, that’s a view which one of the early physicists specialized in high-energy physics, Raoul Gatto, referred to as the zeroth scenario. I am actually not quoting Gatto here, but another theoretical physicist: Gerard ’t Hooft, another Nobel Prize winner (you may know him better because he’s a rather fervent Mars One supporter, but here I am referring to his popular 1996 book In Search of the Ultimate Building Blocks). In any case, Gatto, and most other physicists, including ’t Hooft (despite the fact that ’t Hooft got his Nobel Prize for his contribution to gauge theory – which, together with Feynman’s application of perturbation theory to QED, is actually the backbone of the Standard Model), firmly reject this zeroth scenario. ’t Hooft himself thinks superstring theory (i.e. supersymmetric string theory – which has now been folded into M-theory or – back to the original term – just string theory: the terminology is quite confusing) holds the key to exploring this desert.
But who knows? In fact, we can’t – because of the above-mentioned practical problem of experimental confirmation. So I am likely to stay on this side of the frontier for quite a while – if only because there’s still so much to see here and, of course, also because I am just at the beginning of this road. 🙂 And then I also realize I’ll need to understand gauge theory and all that to continue on this road – which is likely to take me another six months or so (if not more) and then, only then, I might try to look at those little strings, even if we’ll never see them because… Well… Their theoretical diameter is the so-called Planck length. So what? Well… That’s equal to 1.6×10−35 m. So what? Well… Nothing. It’s just that 1.6×10−35 m is 1/10 000 000 000 000 000 of that sub-femtometer scale. I don’t even want to write this in trillionths of trillionths of trillionths etcetera because I feel that’s just not making any sense. And perhaps it doesn’t. One thing is for sure: that ‘desert’ that GUT theorists want us to cross is not just ‘Great’: it’s ENORMOUS!
Richard Feynman – another Nobel Prize scientist whom I obviously respect a lot – surely thought trying to cross a desert like that amounts to certain death. Indeed, he’s supposed to have said the following about string theorists, about a year or two before he died (way too young): “I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say, ‘Well, it might be true.’ For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there’s no reason whatsoever in superstring theory that it isn’t eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn’t produce anything; it has to be excused most of the time. It doesn’t look right.”
Hmm… Feynman and ’t Hooft… Two giants in science. Two Nobel Prize winners – and for stuff that truly revolutionized physics. The amazing thing is that those two giants – who are clearly at loggerheads on this one – actually worked closely together on a number of other topics – most notably on the so-called Feynman–’t Hooft gauge, which – as far as I understand – is the one that is most widely used in quantum field calculations. But I’ll leave it at that here – and I’ll just make a mental note of the terminology here. The Great Desert… Probably an appropriate term. ’t Hooft says that most physicists think that desert is full of tiny flowers. I am not so sure – but then I am not half as smart as ’t Hooft. Much less actually. So I’ll just see where the road I am currently following leads me. With Feynman’s warning in mind, I should probably expect the road condition to deteriorate quickly.
Post scriptum: You will not be surprised to hear that there’s a word for 1×10⁻¹⁸ m: it’s called an attometer (with two t’s, and abbreviated as am). And beyond that we have the zeptometer (1 zm = 1×10⁻²¹ m) and the yoctometer (1 ym = 1×10⁻²⁴ m). In fact, these measures actually represent something: 20 yoctometer is the estimated radius of a 1 MeV neutrino – or, to be precise, it’s the radius of the cross section, which is “the effective area that governs the probability of some scattering or absorption event.” But so then there are no words anymore. The next measure is the Planck length: 1.62×10⁻³⁵ m – and that’s still some hundred billion (10¹¹) times smaller than a yoctometer. Unimaginable, isn’t it? Literally.
Note: A 1 MeV neutrino? Well… Yes. The estimated rest mass of an (electron) neutrino is tiny: at least 50,000 times smaller than the mass of the electron and, therefore, neutrinos are often assumed to be massless, for all practical purposes that is. However, just like the massless photon, they can carry high energy. High-energy gamma ray photons, for example, are also associated with MeV energy levels. Neutrinos are one of the many particles produced in high-energy particle collisions in particle accelerators, but they are present everywhere: they’re produced by stars (which, as you know, are nuclear fusion reactors). In fact, most neutrinos passing through Earth are produced by our Sun. The largest neutrino detector on Earth is called IceCube. It sits on the South Pole – or under it, as it’s suspended under the Antarctic ice, and it regularly captures high-energy neutrinos in the range of 1 to 10 TeV. Last year (in November 2013), it captured two with energy levels around 1000 TeV – so that’s the peta-electronvolt level (1 PeV = 1×10¹⁵ eV). If you think that’s amazing, it is. But also remember that 1 eV is 1.6×10⁻¹⁹ Joule, so 1 PeV is ‘only’ about a ten-thousandth of a Joule. In other words, you would need at least ten thousand of them to briefly light up an LED. The PeV pair was dubbed Bert and Ernie and the illustration below (from IceCube’s website) conveys how the detectors sort of lit up when they passed. It was obviously a pretty clear ‘signal’ – but so the illustration also makes it clear that we don’t really ‘see’ at such small scale: we just know ‘something’ happened.
Bert and Ernie
The Uncertainty Principle re-visited: Fourier transforms and conjugate variables
In previous posts, I presented a time-independent wave function for a particle (or wavicle as we should call it – but so that’s not the convention in physics) – let’s say an electron – traveling through space without any external forces (or force fields) acting upon it. So it’s just going in some random direction with some random velocity v and, hence, its momentum is p = mv. Let me be specific – so I’ll work with some numbers here – because I want to introduce some issues related to units for measurement.
So the momentum of this electron is the product of its mass m (about 9.1×10⁻²⁸ grams) with its velocity v (typically something in the range around 2,200 km/s, which is fast but not even close to the speed of light – and, hence, we don’t need to worry about relativistic effects on its mass here). Hence, the momentum p of this electron would be some 20×10⁻²⁵ kg·m/s. Huh? Kg·m/s? Well… Yes, kg·m/s or N·s are the usual measures of momentum in classical mechanics: its dimension is [mass][length]/[time] indeed. However, you know that, in atomic physics, we don’t want to work with these enormous units (because we then always have to add these ×10⁻²⁸ and ×10⁻²⁵ factors and so that’s a bit of a nuisance indeed). So the momentum p will usually be measured in eV/c, with c representing what it usually represents, i.e. the speed of light. Huh? What’s this strange unit? Electronvolts divided by c? Well… We know that eV is an appropriate unit for measuring energy in atomic physics: we can express eV in Joule and vice versa: 1 eV = 1.6×10⁻¹⁹ Joule, so that’s OK – except for the fact that this Joule is a monstrously large unit at the atomic scale indeed, and so that’s why we prefer electronvolt. But the Joule is a shorthand unit for kg·m²/s², which is the measure for energy expressed in SI units, so there we are: while the SI dimension for energy is actually [mass][length]²/[time]², using electronvolts (eV) is fine. Now, just divide the SI dimension for energy, i.e. [mass][length]²/[time]², by the SI dimension for velocity, i.e. [length]/[time]: we get something expressed in [mass][length]/[time]. So that’s the SI dimension for momentum indeed! In other words, dividing some quantity expressed in some measure for energy (be it Joules or electronvolts or erg or calories or coulomb-volts or BTUs or whatever – there’s quite a lot of ways to measure energy indeed!) by the speed of light (c) will result in some quantity with the right dimensions indeed. So don’t worry about it. Now, 1 eV/c is equivalent to 5.344×10⁻²⁸ kg·m/s, so the momentum of this electron will be some 3,750 eV/c, i.e. about 3.75 keV/c.
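In code, that unit conversion looks like this (my own sketch, just mirroring the arithmetic above):

```python
m_e = 9.109e-31           # electron mass in kg
v = 2.2e6                 # 2,200 km/s in m/s
EV_PER_C = 5.344e-28      # 1 eV/c expressed in kg*m/s

p_SI = m_e * v            # ~2.0e-24 kg*m/s
p_eV_c = p_SI / EV_PER_C  # ~3.75e3, i.e. about 3.75 keV/c
print(p_SI, p_eV_c)
```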
Let’s go back to the main story now. Just note that the momentum of this electron that we are looking at is a very tiny amount – as we would expect of course.
Time-independent means that we keep the time variable (t) in the wave function Ψ(x, t) fixed and so we only look at how Ψ(x, t) varies in space, with x as the (real) space variable representing position. So we have a simplified wave function Ψ(x) here: we can always put the time variable back in when we’re finished with the analysis. By now, it should also be clear that we should distinguish between real-valued wave functions and complex-valued wave functions. Real-valued wave functions represent what Feynman calls “real waves”, like a sound wave, or an oscillating electromagnetic field. Complex-valued wave functions describe probability amplitudes. They are… Well… Feynman actually stops short of saying that they are not real. So what are they?
They are, first and foremost, complex numbers, so they have a real and a so-called imaginary part (z = a + ib or, if we use polar coordinates, z = re^{iθ} = r(cosθ + i·sinθ)). Now, you may think – and you’re probably right to some extent – that the distinction between ‘real’ waves and ‘complex’ waves is, perhaps, less of a dichotomy than popular writers – like me 🙂 – suggest. When describing electromagnetic waves, for example, we need to keep track of both the electric field vector E as well as the magnetic field vector B (both are obviously related through Maxwell’s equations). So we have two components as well, so to say, and each of these components has three dimensions in space, and we’ll use the same mathematical tools to describe them (so we will also represent them using complex numbers). That being said, these probability amplitudes, usually denoted by Ψ(x), describe something very different. What exactly? Well… By now, it should be clear that that is actually hard to explain: the best thing we can do is to work with them, so they start feeling familiar. The main thing to remember is that we need to square their modulus (or magnitude or absolute value if you find these terms more comprehensible) to get a probability (P). For example, the expression below gives the probability of finding a particle – our electron for example – in the (space) interval [a, b]:
probability versus amplitude
Of course, we should not be talking intervals but three-dimensional regions in space. However, we’ll keep it simple: just remember that the analysis should be extended to three (space) dimensions (and, of course, include the time dimension as well) when we’re finished (to do that, we’d use so-called four-vectors – another wonderful mathematical invention).
Now, we also used a simple functional form for this wave function, as an example: Ψ(x) could be proportional, we said, to some idealized function e^{ikx}. So we can write: Ψ(x) ∝ e^{ikx} (∝ is the standard symbol expressing proportionality). In this function, we have a wave number k, which is like the frequency in space of the wave (but then measured in radians because the phase of the wave function has to be expressed in radians). In fact, we actually wrote Ψ(x, t) = (1/x)e^{i(kx – ωt)} (so the magnitude of this amplitude decreases with distance) but, again, let’s keep it simple for the moment: even with this very simple function e^{ikx}, things will become complex enough.
We also introduced the de Broglie relation, which gives this wave number k as a function of the momentum p of the particle: k = p/ħ, with ħ the (reduced) Planck constant, i.e. a very tiny number in the neighborhood of 6.582×10⁻¹⁶ eV·s. So, using the numbers above, we’d have a value for k equal to 3,750 eV/c divided by 6.582×10⁻¹⁶ eV·s. So that’s 0.57×10¹⁹ (radians) per… Hey, how do we do it with the units here? We get an incredibly huge number here (57 with 17 zeroes after it) per second? We should get some number per meter because k is expressed in radians per unit distance, right? Right. We forgot c. We are actually measuring distance here, but in light-seconds instead of meters: a light-second is the distance traveled by light in one second, i.e. 2.998×10⁸ m. So, if we want k expressed in radians per meter, we need to divide this huge number 0.57×10¹⁹ (in rad per light-second) by 2.998×10⁸ (in m per light-second), and then we get a much more reasonable value for k, and with the right dimension too: to be precise, k is about 1.9×10¹⁰ rad/m in this case. That’s still huge: it corresponds with a wavelength of 0.33 nanometer (1 nm = 10⁻⁹ m) but that’s the correct order of magnitude indeed.
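And here is the same juggling with units in code (my own sketch, continuing from the momentum we just computed):

```python
import math

HBAR = 6.582e-16          # reduced Planck constant in eV*s
C = 2.998e8               # speed of light in m/s

p = 3.75e3                # the momentum from above, in eV/c
k_per_light_second = p / HBAR   # ~5.7e18 rad per light-second
k = k_per_light_second / C      # ~1.9e10 rad per meter

wavelength = 2 * math.pi / k    # ~3.3e-10 m, i.e. 0.33 nanometer
print(k, wavelength)
```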
[In case you wonder what formula I am using to calculate the wavelength: it’s λ = 2π/k. Note that our electron’s wavelength is more than a thousand times shorter than the wavelength of (visible) light (we humans can see light with wavelengths ranging from 380 to 750 nm) but that’s what gives the electron its particle-like character! If we increase the electrons’ velocity (e.g. by accelerating them in an accelerator, using electromagnetic fields to propel them to speeds closer to c and also to contain them in a beam), then we get hard beta rays. Hard beta rays are surely not as harmful as high-energy electromagnetic rays. X-rays and gamma rays consist of photons with wavelengths ranging from 1 to 100 picometer (1 pm = 10⁻¹² m) – so that’s another big factor down – and thick lead shields are needed to stop them: they can cause cancer (radiation exposure was Marie Curie’s cause of death), and the hard radiation of a nuclear blast will always end up killing more people than the immediate blast effect. In contrast, hard beta rays will cause skin damage (radiation burns) but they won’t go deeper than that.]
Let’s get back to our wave function Ψ(x) ∝ e^{ikx}. When we introduced it in our previous posts, we said it could not accurately describe a particle because this wave function (Ψ(x) = Ae^{ikx}) is associated with probabilities |Ψ(x)|² that are the same everywhere. Indeed, |Ψ(x)|² = |Ae^{ikx}|² = A². Apart from the fact that these probabilities would add up to infinity (so this mathematical shape is unacceptable anyway), it also implies that we cannot locate our electron somewhere in space. It’s everywhere, and that’s the same as saying it’s actually nowhere. So, while we can use this wave function to explain and illustrate a lot of stuff (first and foremost the de Broglie relations), we actually need something different if we want to describe anything real (which, in the end, is what physicists want to do, right?). As we already said in our previous posts: real particles will actually be represented by a wave packet, or a wave train. A wave train can be analyzed as a composite wave consisting of a (potentially infinite) number of component waves. So we write:
Composite wave
Note that we do not have one unique wave number k or – what amounts to saying the same – one unique value p for the momentum: we have n values. So we’re introducing a spread in the wavelength here, as illustrated below:
Explanation of uncertainty principle
In fact, the illustration above talks of a continuous distribution of wavelengths and so let’s take the continuum limit of the function above indeed and write what we should be writing:
Composite wave - integral
Now that is an interesting formula. [Note that I didn’t care about normalization issues here, so it’s not quite what you’d see in a more rigorous treatment of the matter. I’ll correct that in the Post Scriptum.] Indeed, it shows how we can get the wave function Ψ(x) from some other function Φ(p). We actually encountered that function already, and we referred to it as the wave function in the momentum space. Indeed, Nature does not care much what we measure: whether it’s position (x) or momentum (p), Nature will not share her secrets with us and, hence, the best we can do – according to quantum mechanics – is to find some wave function associating some (complex) probability amplitude with each and every possible (real) value of x or p. What the equation above shows, then, is that these wave functions come as a pair: if we have Φ(p), then we can calculate Ψ(x) – and vice versa. Indeed, the particular relation between Ψ(x) and Φ(p) as established above makes Ψ(x) and Φ(p) a so-called Fourier transform pair, as we can transform Φ(p) into Ψ(x) using the above Fourier transform (that’s how that integral is called), and vice versa. More in general, a Fourier transform pair can be written as:
Fourier transform pair
Instead of x and p, and Ψ(x) and Φ(p), we have x and y, and f(x) and g(y), in the formulas above, but so that does not make much of a difference when it comes to the interpretation: x and p (or x and y in the formulas above) are said to be conjugate variables. What it means really is that they are not independent. There are quite a few of such conjugate variables in quantum mechanics such as, for example: (1) time and energy (and time and frequency, of course, in light of the de Broglie relation between both), and (2) angular momentum and angular position (or orientation). There are other pairs too but these involve quantum-mechanical variables which I do not understand as yet and, hence, I won’t mention them here. [To be complete, I should also say something about that 1/2π factor, but so that’s just something that pops up when deriving the Fourier transform from the (discrete) Fourier series on which it is based. We can put it in front of either integral, or split that factor across both. Also note the minus sign in the exponent of the inverse transform.]
When you look at the equations above, you may think that f(x) and g(y) must be real-valued functions. Well… No. The Fourier transform can be used for both real-valued as well as complex-valued functions. However, at this point I’ll have to refer those who want to know each and every detail about these Fourier transforms to a course in complex analysis (such as Brown and Churchill’s Complex Variables and Applications (2004) for instance) or, else, to a proper course on real and complex Fourier transforms (they are used in signal processing – a very popular topic in engineering – and so there’s quite a few of those courses around).
The point to note in this post is that we can derive the Uncertainty Principle from the equations above. Indeed, the (complex-valued) functions Ψ(x) and Φ(p) describe (probability) amplitudes, but the (real-valued) functions |Ψ(x)|² and |Φ(p)|² describe probabilities or – to be fully correct – they are probability (density) functions. So it is pretty obvious that, if the functions Ψ(x) and Φ(p) are a Fourier transform pair, then |Ψ(x)|² and |Φ(p)|² must be related too. They are. The derivation is a bit lengthy (and, hence, I will not copy it from the Wikipedia article on the Uncertainty Principle) but one can indeed derive the so-called Kennard formulation of the Uncertainty Principle from the above Fourier transforms. This Kennard formulation does not use these rather vague Δx and Δp symbols but clearly states that the product of the standard deviations of these two probability density functions can never be smaller than ħ/2:
σₓσₚ ≥ ħ/2
To be sure: ħ/2 is a rather tiny value, as you should know by now, 🙂 but, so, well… There it is.
As said, it’s a bit lengthy but not that difficult to do that derivation. However, just for once, I think I should try to keep my post somewhat shorter than usual so, to conclude, I’ll just insert one more illustration here (yes, you’ve seen that one before), which should now be very easy to understand: if the wave function Ψ(x) is such that there’s relatively little uncertainty about the position x of our electron, then the uncertainty about its momentum will be huge (see the top graphs). Vice versa (see the bottom graphs), precise information (or a narrow range) on its momentum, implies that its position cannot be known.
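To see the Kennard product emerge numerically, here is one last sketch (my own, with ħ = 1 so that p = k, and a made-up Gaussian of width σₓ = 1): the product of the standard deviations in x-space and p-space comes out right at the minimum, ħ/2.

```python
import numpy as np

N, L = 8192, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
psi = np.exp(-x**2 / 4)                      # Gaussian with sigma_x = 1
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # p = hbar*k = k, since hbar = 1
phi = np.fft.fft(psi)                        # momentum-space amplitudes

def std(values, weights):
    """Standard deviation of 'values' under the (normalized) 'weights'."""
    w = weights / weights.sum()
    mean = (values * w).sum()
    return np.sqrt(((values - mean)**2 * w).sum())

sigma_x = std(x, np.abs(psi)**2)
sigma_p = std(p, np.abs(phi)**2)
print(sigma_x, sigma_p, sigma_x * sigma_p)   # ~1, ~0.5, ~0.5 = hbar/2
```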
Does all this math make it any easier to understand what’s going on? Well… Yes and no, I guess. But then, if even Feynman admits that he himself “does not understand it the way he would like to” (Feynman Lectures, Vol. III, 1-1), who am I? In fact, I should probably not even try to explain it, should I? 🙂
So the best we can do is try to familiarize ourselves with the language used, and so that’s math for all practical purposes. And, then, when everything is said and done, we should probably just contemplate Mario Livio’s question: Is God a mathematician? 🙂
Post scriptum:
I obviously cut corners above, and so you may wonder how that ħ factor can be related to σₓ and σₚ if it doesn’t appear in the wave functions. Truth be told, it does. Because of (i) the presence of ħ in the exponent in our e^{i(p/ħ)x} function, (ii) normalization issues (remember that probabilities (i.e. |Ψ(x)|² and |Φ(p)|²) have to add up to 1) and, last but not least, (iii) the 1/2π factor involved in Fourier transforms, Ψ(x) and Φ(p) have to be written as follows:
Position and momentum wave function

Note that we’ve also re-inserted the time variable here, so it’s pretty complete now. One more thing we could do is to substitute x for a proper three-dimensional space vector or, better still, introduce four-vectors, which would allow us to also integrate relativistic effects (most notably the slowing of time with motion – as observed from the stationary reference frame) – which become important when, for instance, we’re looking at electrons being accelerated, which is the rule, rather than the exception, in experiments.
Remember (from a previous post) that we calculated that an electron traveling at its usual speed in orbit (2,200 km/s, i.e. less than 1% of the speed of light) had an energy of about 70 eV? Well, the Large Electron-Positron Collider (LEP) did accelerate them to speeds close to light, thereby giving them energy levels topping 104.5 billion eV (or 104.5 GeV as it’s written) so they could hit each other with collision energies topping 209 GeV (they come from opposite directions so it’s two times 104.5 GeV). Now, 209 GeV is tiny when converted to everyday energy units: 209 GeV is 33×10⁻⁹ Joule only indeed – and so note the minus sign in the exponent here: we’re talking billionths of a Joule here. Just to put things into perspective: 1 Watt is the energy consumption of an LED (and 1 Watt is 1 Joule per second), so you’d need to combine the energy of billions of these fast-traveling electrons to power just one little LED lamp. But, of course, that’s not the right comparison: 104.5 GeV is more than 200,000 times the electron’s rest mass (0.511 MeV), so that means that – in practical terms – their mass (remember that mass is a measure for inertia) increased by the same factor (204,500 times to be precise). Just to give an idea of the effort that was needed to do this: CERN’s LEP collider was housed in a tunnel with a circumference of 27 km. Was? Yes. The tunnel is still there but it now houses the Large Hadron Collider (LHC) which, as you surely know, is the world’s largest and most powerful particle accelerator: its experiments confirmed the existence of the Higgs particle in 2013, thereby confirming the so-called Standard Model of particle physics. [But I’ll say a few things about that in my next post.]
Oh… And, finally, in case you’d wonder where we get the inequality sign in σxσp ≥ ħ/2, that’s because – at some point in the derivation – one has to use the Cauchy-Schwarz inequality (closely related to the triangle inequality): |z₁ + z₂| ≤ |z₁| + |z₂|. In fact, to be fully complete, the derivation uses the more general formulation of the Cauchy-Schwarz inequality, which also applies to functions as we interpret them as vectors in a function space. But I would end up copying the whole derivation here if I add any more to this – and I said I wouldn’t do that. 🙂 […] |
3c8a2e386ba897e9 | TOWNSEND -- There is little doubt that music is one of the most powerful tools for changing brain chemistry; it can make you euphoric, and it can give you the blues.
But on Thursday, March 6, at 7 p.m., chemistry and the blues come together in a unique pairing of a blues guitar-playing UCLA chemistry professor and a former special-education teacher turned blues guitarist.
The event, titled "Elements of the Blues," will take place in the North Middlesex Regional High School auditorium, and will feature UCLA chemistry professor Dr. Eric Scerri and W.C. Handy Blues Award winner Ronnie Earl.
This unusual pairing was the brainchild of Earl's wife, NMRHS chemistry teacher Donna Horvath.
"I have been teaching chemistry for several years now," said Horvath in a recent interview, "and I have observed that some kids like some parts of it and not others. But they seem to have an affinity with the periodic table. Even the least interested students seem to form an attachment to it."
Horvath said that she had read Scerri's June 2013 "Scientific American" article "Cracks in the Periodic Table," which discusses a possible new staircase form of the periodic table of the elements.
The staircase on the right side of the periodic table is a dividing line between metals and nonmetals. Scerri's article addresses the fact that some recent additions to the table may differ in their chemistry from the other elements in the same column, thus breaking the periodic rule that had defined the table for the past 150 years.
"I just really love the article, because we talk about the new elements and we are starting a new row now" in class, Horvath said.
A brief biography accompanying the article states that Scerri is not only a historian and philosopher of chemistry at UCLA, but also that he is a serious blues guitarist.
"I read his bio and wondered if I could make a personal connection with him and use my husband," Horvath revealed.
She contacted Scerri and told him how she would use his article in her class, and asked if he would consider making a trip to the east coast.
"He said that he would and that his wife would love to visit Boston," she said.
The Bluesman
Horvath's husband was born Ronald Horvath in Queens, New York, a first-generation American of Hungarian Jewish parents.
Earl eventually moved to Boston to pursue a degree in Special Education at Boston University, but also took an interest in guitar.
This, said Horvath, despite an early report card of Earl's that stated that he had no aptitude in music.
"He took piano as a child, but it was when he saw the Beatles that he knew he wanted to play guitar," Horvath said.
"He thought it was so cool to play the guitar, but his parents wanted him to have a respectable career. Then, he went to a concert and heard B.B. King and Freddie King, and it really resonated with him."
He eventually joined Roomful of Blues, and later formed the band Ronnie Earl and the Broadcasters. He has played with Eric Clapton, B.B. King, Muddy Waters, and many other blues greats, and has released nearly 30 albums during his 30-plus years as a professional musician.
Still, in between gigs, Earl enjoys coming to her school to talk with her students, Horvath said. "He talks about what it means to be an artist and create music. Students may have a real passion for music, but that may not be their vocation. Ronnie talks with them about how people can blend their avocations with their jobs."
Scerri, she discovered, chose to teach at UCLA because he wanted music in his life. "And Ronnie had to make a choice, too. He gave up his job (teaching special education) to spend more time on music."
The Professor
In a recent coast-to-coast phone interview, Scerri said that he is delighted by the way the upcoming program has come together.
"I have known of his playing for many years," he said of Earl, "and I cannot wait to jam with him."
And just what does music have to do with the periodic table of the elements?
"The history of the periodic table, the topic that I have published three books on, has featured a curious incident involving chemistry and music," Scerri replied.
"The London chemist, John Newlands, first proposed his law of octaves in the 1860s, and made an analogy with musical octaves whereby notes repeat after a certain interval just as elements seem to in the periodic table. When he presented this idea he was mocked by London's leading chemists," Scerri went on. "But the analogy is essentially correct!"
"The periodic table is an arrangement of all the elements?the fundamental building blocks of nature. They are all very different and have characteristic properties, yet there is this underlying system that gathers them all together and makes sense of them."
Scerri said that during the program at the high school, he is mainly going to speak about the periodic table, but will also try to make a connection with music.
Besides the law of octaves, which found that each element was similar to the element eight places further on, there is also the move from the Bohr model of the atom to the quantum-mechanical one, governed by the Schrödinger equation, which draws an analogy between physics and music, Scerri added.
Put forward by Erwin Schrödinger, this partial differential equation describes how the quantum state of some physical system changes with time.
"In order to understand that conceptual change," said Scerri, "I use the guitar and show how when you lightly touch the strings you can produce a harmonic."
When a string is fixed at both ends and strummed without fretting, it sounds as an open string. If you touch the string lightly at the twelfth fret and strum it, you get a harmonic, he said.
"By breaking up the string into halves, thirds, quarters, or fifths, it's a perfect analogy to the Schrödinger approach to quantum mechanics," Scerri explained. "You just apply the math given those boundary conditions."
And the Blues?
"I absolutely love the blues. It's one of the reasons I moved to the states from England. During the '60s and '70s blues revival, the kids in England started listening to Eric Clapton, Jeff Beck, and The Rolling Stones, and then started listening to American blues when Americans were not listening to the blues. The Americans were listening to the British musicians, strangely," Scerri reminisced.
"For me, although I discovered blues in London, the British lost interest but the Americans retained it. That's one of the reasons I wanted to be in America. (The American blues) were not sophisticated; the expression is the sophistication," he said.
The Program
"The Elements of the Blues" will be held in the NMRHS auditorium, located at 19 Main Street, Townsend, from 7-9 p.m. on Thursday, March 6, with a snow date of Friday, March 7. It is free and open to the public, and the auditorium is wheelchair accessible.
For more information, contact Dr. Horvath at 978-587-8721, or
The program is supported, in part, by a grant from the local Cultural Councils of Ashby, Pepperell, and Townsend, and from the Amanda Dwight Entertainment Committee of Townsend, local agencies that are supported by the Massachusetts Cultural Council, a state agency. |
4b954a1b96227c01 | Wave function requirements
1. Mar 13, 2005 #1
Hi, I have a question about the mathematical requirements on a wave function in a potential that is infinite at [tex]x \leq 0[/tex]. (On the other side it goes towards infinity as [tex]x \rightarrow \infty[/tex].) Now, consider a wave function in this potential that is zero at [tex]x = 0[/tex] and at [tex]x = \infty[/tex]. Does it matter what that wavefunction is at [tex]x = -\infty[/tex]? I mean, I just figured you would have a wave function there that's zero all the way. Why will a wave function that goes to [tex]-\infty[/tex] at [tex]x = -\infty[/tex] not fit in the (time-independent) Schrödinger equation, whereas one that goes to zero at [tex]-\infty[/tex] does? After all, when we're normalizing it, we're just integrating from 0 to [tex]\infty[/tex] and don't really need to bother with it at negative x values. Or is that just some mathematical requirement that is independent of the physical properties? Can someone enlighten me, please?
2. jcsd
3. Mar 13, 2005 #2
Science Advisor
Homework Helper
Is this your problem:
"Solve the unidimensional SE for one particle in the the potential field:
[tex] U(x)=\left\{\begin{array}{c}+\infty,\ \mbox{for} \ x\in(-\infty,0]\\ 0,\ \mbox{for} \ x\in (0,+\infty)\end{array}\right. [/tex]
", because you didn't say anything about the potential on the positive semiaxis...
4. Mar 13, 2005 #3
The potential is the harmonic oscillator on the positive semiaxis. The problem is what the mathematical requirements for the wave function are. Let's say you have a function [tex]\psi(x)[/tex]; what are the mathematical requirements that function needs to meet in order to be a wavefunction for that potential?
5. Mar 13, 2005 #4
Science Advisor
Homework Helper
Physical states are described by normalizable wavefunctions...
In your case, on the negative semiaxis the wave function is zero and on the positive semiaxis it is a Hermite polynomial (times a Gaussian factor). So I'd say this is normalizable.
Then comes the continuity of the wavefunction. Both 0 and the Hermite polynomials are continuous; however, at the point 0, the continuity must be enforced.
The first derivative issue is rather tricky. You may want to consult a book on how to deal with infinite potentials and the conditions imposed on the wavefunction.
6. Mar 14, 2005 #5
Science Advisor
Think about the following:
1. The eigenfunctions for the linear oscillator are strictly even or odd.
2. For this problem, why should there be any boundary condition on the momentum, the first spatial derivative, at x=0, if two boundary conditions have already been imposed? (Think about a particle wave packet, in the oscillator well, moving toward the x=0 wall. What's going to happen at the wall?)
Reilly Atkinson
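To make reilly's two points concrete, here is a quick numerical sketch (not from the original thread; finite differences with ħ = m = ω = 1) of the oscillator potential restricted to x > 0, with the infinite wall enforcing ψ(0) = 0. The eigenvalues come out as 3/2, 7/2, 11/2, ..., exactly the odd-n levels of the full oscillator, whose eigenfunctions already vanish at the wall:

```python
import numpy as np

# Grid on the allowed half-line x > 0; the infinite wall at x <= 0 is imposed
# simply by leaving those points out, which forces psi(0) = 0.
N, xmax = 2000, 12.0
x = np.linspace(xmax / N, xmax, N)
h = x[1] - x[0]

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + x^2/2, hbar = m = omega = 1
H = (np.diag(1.0 / h**2 + 0.5 * x**2)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))

print(np.linalg.eigvalsh(H)[:4])  # ~ [1.5, 3.5, 5.5, 7.5]: the odd oscillator levels
```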
|
41bcd48ff2bd5b1d | Ultrafast Field Emission Team
We are interested in
Manipulation of coherent electron waves
Ultrafast electron and phonon dynamics
Low-dimensional materials
Topological insulator
Quantum system
Atomic resolution microscopy
We are moving to Japan and looking for working students.
In the last decade, we have been working on ultrafast field emission.
What is field emission microscopy?
When we apply a high voltage to a metallic tip of nanometer sharpness (a nano-tip), electrons can be emitted from the tip’s apex into the vacuum via electron tunneling. This is what is called field emission. The emitted electrons propagate radially from the tip apex. As a result, they magnify nanoscopic information about the apex to a macroscopic scale, which enables field emission microscopy (FEM).
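As an aside, the extreme nonlinearity of this tunneling process can be illustrated with the textbook Fowler-Nordheim expression for the emitted current density (a rough sketch with standard literature constants and a typical tungsten work function; it is not our actual tip model):

```python
import numpy as np

# Textbook Fowler-Nordheim estimate of the field-emission current density.
a = 1.54e-6   # A eV / V^2 (standard Fowler-Nordheim constant)
b = 6.83e9    # V / (m eV^(3/2)) (standard Fowler-Nordheim constant)
phi = 4.5     # work function of tungsten in eV (typical literature value)

def j_fn(F):
    """Current density (A/m^2) for a local surface field F in V/m."""
    return (a * F**2 / phi) * np.exp(-b * phi**1.5 / F)

for F in (2e9, 3e9, 5e9):   # fields of a few V/nm, typical for nano-tips
    print(f"F = {F:.0e} V/m -> J ~ {j_fn(F):.2e} A/m^2")
```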
This is a typical field emission pattern from a tungsten tip oriented towards the [011] direction. We can observe four nanoscale emission sites.
We can select the field emission sites.
Laser pulses induce electron emission from the same emission sites, but the emission patterns become strongly asymmetric and change drastically with the angle of laser polarization. In effect, we can select the emission sites. For instance, there are two emission sites spaced approximately 30 nm apart. Even when emission sites are this close together, we can select either one of them.
Understanding the physics behind this is the most enjoyable part for us. We performed plasmonic simulations using OpenMaXwell and investigated how a laser wave propagates along a model tip.
Using the resulting optical field distributions on the nano-tip and the field emission theory, we successfully reproduced the asymmetric emission patterns. For more details.
The site-selective technique can control Young's interference between two electron beams.
The above site-selective technique can be used to control Young's interference. If we have two emission sites, we can replicate Young's interference experiment with field emission. In the nano-tip a coherent electron wave exists, and it is split into two coherent waves when it is emitted into the vacuum. If the two coherent electron waves overlap each other in the vacuum, the interference can be observed.
Using 7fs laser pulses, we can observe the interference patterns between two consecutive emission sites. They appear as streaky patterns indicated by the red arrows. Let’s have a look at the interference patterns of the electron emissions from sites A and B. If the electron emissions occur from both A and B, an interference can be observed. However, if the electron emissions occur from either A or B, no interference can be observed.
The interference was successfully reproduced by simulating the temporal evolution of an electron wave from a nano-tip to the vacuum based on a time-dependent Schrödinger equation. The simulations revealed the underlying physics that drive the interference patterns in the laser-induced field emission. For more details.
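In one dimension, the essence of such a simulation fits in a few lines. The sketch below (a minimal toy stand-in for the full calculation, with purely illustrative parameters) propagates wavelets from two coherent sources, labelled A and B for the two emission sites, with the split-step Fourier method: the density stays a single smooth lump when only A emits, and becomes striped with fringes when both emit.

```python
import numpy as np

# Minimal 1D split-step toy (hbar = m = 1): wavelets from "sites" A and B
N, Lx, dt, steps = 4096, 400.0, 0.05, 200
x = np.linspace(-Lx / 2, Lx / 2, N)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
kinetic = np.exp(-1j * k**2 / 2 * dt)       # exact free-particle step in k-space

def wavelet(x0, k0):
    # coherent Gaussian wavelet launched from x0 with momentum k0
    return np.exp(-(x - x0)**2 / (2 * 4.0**2) + 1j * k0 * x)

def final_density(psi):
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2))
    for _ in range(steps):                  # free propagation until the waves overlap
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    return np.abs(psi)**2

def n_peaks(rho):
    c = rho[np.abs(x) < 15]
    inner = c[1:-1]
    return int(np.sum((inner > c[:-2]) & (inner > c[2:]) & (inner > 0.1 * c.max())))

only_A = final_density(wavelet(-20.0, +2.0))
both = final_density(wavelet(-20.0, +2.0) + wavelet(+20.0, -2.0))
print("peaks, site A only  :", n_peaks(only_A))   # one smooth lump
print("peaks, sites A and B:", n_peaks(both))     # many interference fringes
```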
The energy spectra of the emitted electrons tell about ultrafast electron dynamics.
We developed a site-selective ultrafast electron source on scales of nanometers and femtoseconds. Such an electron source can be used for time-resolved electron microscopy and for realizing ultrafast devices. In order to use this source for applications, we need to understand its electron dynamics. Under weak-field laser illumination, femtosecond dynamics can be expected. For instance, we simulated how the population of excited electrons changes while a laser pulse passes over the tip apex. The upper right panel shows the temporal evolution of the electron distribution function.
These femtosecond electron dynamics can be addressed by the energy spectra of the emitted electrons. The energy spectra can be measured by using an electron energy analyzer. We have a hemispherical analyzer with quite good energy resolution.
The measured energy spectra (green dots) change depending on the laser powers and the tip voltages. Using the above theory, we have successfully reproduced the energy spectra (pink curves), which allows us a direct insight into the electron dynamics involved. For more details.
If the laser field is strong, we have additional electron dynamics in the vacuum: some of the emitted electrons can be driven back to the surface by the oscillating laser fields and elastically (or inelastically) re-scattered from it.
In order to understand the physics in the strong laser field regime, we performed 3D electron tracking simulations. A 7 fs laser pulse induces electron emission. The emitted electrons feel DC fields, AC fields, and Coulomb forces from other emitted electrons and from image charges in the nano-tip.
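The core of such a tracking simulation is plain classical mechanics. The following 1D caricature (illustrative units; it drops the DC field, Coulomb repulsion, and image charges that the full 3D simulations include) integrates one electron born at rest on the surface at a given phase of the oscillating laser field; birth phases just after a field crest give trajectories that are driven back to the surface, where they can re-scatter.

```python
import numpy as np

# One photoemitted electron in the oscillating laser field (1D caricature).
E0, omega, dt = 0.8, 1.0, 1e-3     # field amplitude, frequency, time step

def returns_to_surface(phase):
    """Electron born at rest on the surface (x = 0) at the given laser phase."""
    t, x, v = phase / omega, 0.0, 0.0
    for _ in range(int(3 * 2 * np.pi / (omega * dt))):   # follow ~3 optical cycles
        v += E0 * np.cos(omega * t) * dt   # force along +x pushes it into vacuum
        x += v * dt
        t += dt
        if x < 0.0:
            return True                    # driven back: it can re-scatter
    return False

for deg in (10, 45, 300, 330):
    print(f"birth phase {deg:3d} deg -> returns: {returns_to_surface(np.radians(deg))}")
```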
The simulated energy spectra thus successfully reproduced the observed energy spectra in the strong-field regime. For more details.
The laser pulses can modify the tip surface.
In even stronger laser fields, thermal effects become prevalent and laser heating modifies the tip surface asymmetrically. For more details.
Dr. Hirofumi Yanagisawa, DFG project leader,
Am Coulombwall 1, 85748 Garching, office: 215
Office Phone: 08928954087 |
8c72b19298ef5995 | TOPIC: What if Time Really Exists? by Sean Carroll
Sean Carroll wrote on Nov. 24, 2008 @ 09:32 GMT
Essay Abstract
Despite the obvious utility of the concept, it has often been argued that time does not exist. I take the opposite perspective: let's imagine that time does exist, and the universe is described by a quantum state obeying ordinary time-dependent quantum mechanics. Reconciling this simple picture with the known facts about our universe turns out to be a non-trivial task, but by taking it seriously we can infer deep facts about the fundamental nature of reality. The arrow of time finds a plausible explanation in a "Heraclitean universe," described by a quantum state eternally evolving in an infinite-dimensional Hilbert space.
Author Bio
Sean Carroll is a Senior Research Associate in theoretical physics at the California Institute of Technology. He obtained his Ph.D. from Harvard University in 1993, and has held positions at MIT, the Institute for Theoretical Physics at UC Santa Barbara, and the University of Chicago. He is the author of Spacetime and Geometry, a graduate-level textbook on general relativity. His research interests include cosmology, field theory, particle physics, general relativity, quantum gravity, quantum mechanics, and thermodynamics.
John Merryman wrote on Nov. 24, 2008 @ 15:26 GMT
If time is eternal, what would be the consequence of space being infinite?
As a consequence of fluctuation ([long link]), space expands; but since the universe would be infinite, this would only cause a form of opposing instability and pressure, resulting in the gravitational collapse and atomic spin of mass. That would explain how order arises from chaos, thus creating low-entropy states, which eventually break down and radiate their energy back out, in a convection cycle of expanding energy and collapsing structure?
Hasn't Complexity Theory shown order does arise from chaos anyway?
Regards from the gallery,
Peter Lynds wrote on Nov. 24, 2008 @ 18:24 GMT
Dear Sean,
I realise that there could be an element of wanting to play devil's advocate in your essay, but with all respect, what if God or the aether really exist? As is the case with those two, there is just no physical or logical reason to invoke the existence of time. Moreover, if time did exist, one can show that a Heraclitean universe and change would not be possible. Lastly, in relation to the idea of time being infinite, you seem reluctant to take on board a certain point!
Best wishes
Elliot Tarabour wrote on Nov. 24, 2008 @ 19:53 GMT
I think it's great that somebody is finally standing up for time. I think the line of reasoning that time is illusory is significantly flawed. My belief stems from the fact that I feel that there is a yet to be articulated semi-radical revision of our view of the fundamental nature of reality which incorporates the flow of information as intrinsic in the fabric rather than as a byproduct or adjunct to that nature. As such I think time is a real and critical element in such a formulation.
Peter Lynds wrote on Nov. 24, 2008 @ 21:11 GMT
PS: With my previous comment, I should have probably been more specific in relation to your arguments. For example, if one assumes the existence of time via the Schrodinger equation, through the resulting necessary assumption of the existence of instants in time underlying the equation, it follows that change would be impossible.
FQXi Administrator Anthony Aguirre wrote on Nov. 24, 2008 @ 21:31 GMT
A very nice essay, and I agree with much of what you say in it. A few thoughts:
a) Thank you for tracking down this quote of Eddington, however you did it: it is a great statement of the Boltzmann's brain paradox! I will henceforth steal and employ it at every appropriate opportunity.
b) The conclusion of p. 7 is that the basic sensibility of the world requires the universe to have an infinite dimensional Hilbert space. This is an amazing thing: I can pick out a dimensionality that is *as large as I like*, and instantly rule it out via this argument. Doesn't this bother you? That is, we have a case where a physical infinity is qualitatively and observationally different from any arbitrarily large number. This is either amazing, or something is wrong with the argument (though it is not clear to me what, I only have some hunches.)
c) I'm not sure I would really agree that `baby universe' creation via the 'Recycling universe' mechanism (the reverse Coleman-DeLuccia process) should count as creating low-entropy regions while increasing overall entropy. In fact, I'm fairly convinced that this process is precisely what a downward entropy fluctuation in the thermal system of dS would look like. It's not at all clear that it really helps with the B-Brain problem.
Member Sean Carroll wrote on Nov. 24, 2008 @ 21:51 GMT
Hi Anthony--
a) I have to give credit to Don Page for the Eddington quote. There are also some great collections of original papers by Boltzmann and contemporaries, which are often surprisingly readable.
b) Yes it is remarkable! Which is why I tried to make the assumptions behind the argument as clear as possible. (There is one fuzzy point I didn't have the space to explore in this essay: the connection between the time parameter in the assumed Schrodinger equation and the time we use in our spacetime description of cosmology.)
c) I'm not sure about this myself, I was just trying to keep options open. You might very well be right. Recycling has the advantage of being better understood than Farhi-Guth baby-universe creation, but it's not clear that it really addresses this problem.
Moshe wrote on Nov. 25, 2008 @ 00:46 GMT
Nice essay, I enjoyed reading it. A couple of quick comments:
I see nothing to preclude the possibility that dual time in all of its eternity covers just the period after the big bang. After all we have examples where the boundary time covers only a patch of the dual spacetime (say the case of AdS black holes, where it covers the region outside the horizon). More generally, that dual time is probably not simply related to any clock reading in some semiclassical bulk spacetime.
As for Anthony's question b), this coincides with my prejudice: infinity is not a number, it is a limiting process, and anything which depends on any quantity being strictly infinite should be viewed with suspicion.
Now, if you replace your infinite Hilbert space by a finite one, you'd have recurrences, but by making the Hilbert space larger and larger you'd make them appear later and later. Seems to me that you insert the infinity by demanding that the universe *always* looks like ours for all eternity. We have no evidence for that, and by definition we never will. If we demand that the universe has interesting things going on for the first 15 billion years, or any other finite period of time, we can live with a finite Hilbert space, no?
Anonymous wrote on Nov. 25, 2008 @ 01:00 GMT
I think that the argument that unitarity implies that time must be infinite is *extremely* weak. Unitarity can be stated loosely as "the amount of information at any time [that exists] is the same as the amount of information at any other time [that exists]." Clearly that can be true if time is finite. SC's argument is like saying that the Big Bang [as classically understood] violates the law of conservation of energy, and is therefore incompatible with the Einstein equations. Of course this is wrong. But then the whole argument falls to the ground.
Member Sean Carroll wrote on Nov. 25, 2008 @ 01:25 GMT
Moshe, I agree with the importance of that loophole, as I alluded to in my answer to Anthony's point b). I probably could have made that clear in the essay, but I was feeling the pressure of the word limit. I would personally bet against the possibility that dual time only covers the post-big-bang universe, because I doubt that the whole universe is Robertson-Walker, and that the BB is a boundary stretching through all of space -- but it's certainly a logical possibility.
About the infinity, I think this is a good example of where "infinite" is very different from "really big." For the simple reason that, by hypothesis, time itself is infinite. If time is finite, you can always make the Hilbert space big enough to avoid recurrences/ergodicity; but if time is infinite, you can't, and the argument goes through. If you like, the assumption that time is infinite is where the importance of infinity enters the argument.
anonymous, I don't think it's unitarity that implies time must be infinite, it's the Schrodinger equation. There is nothing about the wave function that would ever stop it from evolving; it's always just a ray in a Hilbert space, all of which are essentially created equal.
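The finite-dimensional half of this exchange is easy to check numerically. The toy sketch below (an illustration of the recurrence argument, with randomly chosen energies and weights; it is not from the thread) evolves a state in a d-dimensional Hilbert space and records how close the fidelity |⟨ψ(0)|ψ(t)⟩|² returns to 1 within a fixed time window; enlarging d only postpones the recurrences, which is the sense in which an eternally evolving universe needs infinitely many energy eigenstates.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_return_fidelity(dim, t_max=2e4, n_t=100_000):
    """Max of |<psi(0)|psi(t)>|^2 over a time window, for random H and psi(0)."""
    E = rng.uniform(0.0, 1.0, dim)                 # energy eigenvalues (hbar = 1)
    c = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    w = np.abs(c)**2 / np.linalg.norm(c)**2        # weights |c_k|^2 of psi(0)
    t = np.linspace(1.0, t_max, n_t)
    overlap = np.exp(-1j * np.outer(t, E)) @ w     # sum_k w_k exp(-i E_k t)
    return np.abs(overlap).max()**2

for d in (2, 5, 20):
    print(f"dim {d:2d}: best return fidelity up to t = 2e4 ~ {best_return_fidelity(d):.3f}")
```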
Peter Lynds wrote on Nov. 25, 2008 @ 01:38 GMT
Hi Sean,
I find your lack of response to my comment/challenge a little bit unfortunate.
Best wishes
Kaleberg wrote on Nov. 25, 2008 @ 02:28 GMT
On page 4: "and brining to life Friedrich Nietzsche’s image of eternal return": Is that supposed to be "bringing to life", or is the image actually immersed in salt water?
But if you have an infinite dimensional Hilbert space, doesn't that get you all kinds of quantum weirdness? Or, is that what you want?
Greg Egan wrote on Nov. 25, 2008 @ 02:49 GMT
A very interesting essay! Personally I'm not persuaded that our own failure to be Boltzmann Brains tells us anything about the number of observers in the whole history of the universe who *are* Boltzmann Brains (surely that was Hartle and Srednicki's point?) but nonetheless it's an attractive prospect to banish such entities completely.
A few minor typos:
page 2, last sentence of first paragraph:
"by acting the Hamiltonian operator on that state"
page 6, last sentence:
"and brining to life Friedrich Nietzsche’s image of eternal return"
page 8, second-last paragraph (missing reference here?):
"This is a very different scenario from the various forms of eternal cosmologies that feature a low-entropy “bounce” that replaces the Big Bang [?];"
Dr. E (The Real McCoy) wrote on Nov. 25, 2008 @ 02:51 GMT
Hello Sean,
Fun paper and great to see a fan of time here!
"Our conclusion that the Hilbert space of the universe needs to be infinite-dimensional might not seem
very startling; the universe is a big place, why should we be surprised that it requires a big Hilbert space?"
Moving Dimensions Theory can provide an infinite number of dimensions. As the fourth dimension is...
Hrvoje Nikolic wrote on Nov. 25, 2008 @ 13:04 GMT
Hi Sean,
I've really enjoyed your essay.
However, I have one comment. I think that quantum gravity does not necessarily imply the Wheeler-DeWitt equation
H |psi> = 0
For example, even if you do NOT take into account dualities of string theory, it is still true that string theory is a theory of quantum gravity without the Wheeler-DeWitt equation.
Do you agree?
Dr. E (The Real McCoy) wrote on Nov. 25, 2008 @ 14:58 GMT
Hello Hrvoje,
I'm not sure you have noticed, but string theory isn't actually a theory, in the traditional sense, like MDT.
MDT's postulate: The fourth dimension is expanding relative to the three spatial dimension at the rate of c: dx4/dt=ic.
But what are string theory's postulates and equations? Is it not amazing that not even Sean knows string theory's postulates and...
Moshe wrote on Nov. 25, 2008 @ 17:25 GMT
Sorry for not reading your reply carefully enough. I'm still confused about the logic though, so with the risk of making the same mistake again: what phenomenological issues prevent us from having recurrences in the asymptotic future? Granted, we haven't seen eggs unscrambling, but maybe that's because we have not been around long enough to sample significantly the Hilbert space.
Member Sean Carroll wrote on Nov. 25, 2008 @ 17:45 GMT
Hrvoje-- Again, I could have been more precise. Even in GR, you don't necessarily get the WdW equation; it depends on your boundary conditions. (And in string theory you can get the equivalent of it.) My only point was that it is a common starting point for many investigations of quantum gravity.
Moshe-- The point is just that most "people like us" will have been around long enough. Given any macrostate you like, including one in which you are absolutely convinced you arose from a low-entropy past with a Big Bang etc, it is extremely likely (in a finite-dim Hilbert space where the state evolves ergodically through a specified torus) that you actually fluctuated out of a higher-entropy past, and that the next observation you do will reveal the thermal equilibrium all around you. All of your memories are completely unreliable, etc.
It's just the Boltzmann Brain argument, but this is a context in which it really works rigorously, not just at a hand-wavy level. If you are evolving eternally in a finite-dim Hilbert space, there is a very well-defined measure on the space of configurations. You have no right to put yourself in a part of the evolution which you deem to be thermodynamically sensible; all you can do is restrict attention to moments in the evolution resembling your macrostate. And the overwhelming majority of those will be thermodynamically crazy.
Moshe wrote on Nov. 25, 2008 @ 18:56 GMT
OK, that's the part I missed, thanks.
F. Le Rouge wrote on Nov. 25, 2008 @ 19:36 GMT
Contrary to yours my French opinion is that Time is worshiped as a God in US culture in law, music, movies, science, economy, more than in the German romantic one if it is possible… The difference with Greek religion is that Chronos is not such a positive God.
Subtle Time even enables US Scientists to build highways with space-time blocks to travel until the Infinity or the Big-Bang. Or to predict the Future from Past informations.
(Just tell me WHO is fighting against Time invasion of Physics here in this forum because I am looking for this person for a while.)
I am the only one here to say that the Travel is in Einstein’s Mind, that the ‘wave’ in Quanta Physics has nothing to do with matter, so let me please defend the idea that Time does not exist that you are caricaturing in your essay.
In a few words:
- Saint Augustine is not the best pleader for ‘Present Time’ but the European Middle-age or Aristotle.
- Time ‘does not exist’ for Aristotle in Matter/material things, but he does not deny its existence in the ‘concepts’ at all (‘Physics’, III-VI). Aristotle’s idea is that one must be careful and not give to material things the ideas’ properties that matter does not have. Eternity, Infinity, the ‘Standard conceptual model of Time’ in other words, made basically with a dot and a line or a circle (including both ideas of Infinity in quantity and in distance/time).
- Parmenides and the Eleatics in general are not some sort of French 'agents provocateurs', as you are insinuating; they are not far away from Aristotle's idea that Time is an accident. The difference is that Aristotle wants to avoid binary language to fight against binary language.
- Your mistake, Sean Carroll, is due to this: Infinity, wave, eternity, dots, circles, arrows, music are still part of the reality which is so ‘made of virtuality’ or ‘potentiality’ for you. And this specific opinion, you do loan it to everybody! (Clinton K. Miller on this forum for instance who is trying to strengthen Time too was surprised that one could have another idea about Time than he does.)
- To sum up: Aristotle, Saint Thomas Aquinas or K. Marx, to take examples of famous followers, are denying the utility of the Time-concept for sure, but not arguing that time does not exist in ideas or language.
Narendra Nath wrote on Nov. 26, 2008 @ 13:34 GMT
Dear Author,
Your simple view about the reality of Time and the description of all the known processes to be dealt with using Time-dependent Quantum physics treating the Universe to exist in a unique quantum state, appears to signify that only quantum physics can lead us to the reality. The scientific facts about the Universe known do not conform to such a simplification. What about the birth of the Universe via Big Bang and what existed before, is an enigma still. We all know that there is awareness of the humans that crosses beyond the body senses and scientific instruments. A term 'consciousness' has been admitted as a non-physical entity that covers all the different levels of human awareness. Even famous neurologists have seen the neurons in the Supplementary Motor Area of the brain to become active when no activity was expected from within the body senses. A non-physical covering is considered to surround the SMA, to understand the neuron activity due to external interactions that leave an impression in this covering permanently even after the death of the human concerned. Thus, it appears that the universe and things therein, including the humans need to concern themselves with such an entity 'consciousness' and the same is not open to model on current scientific knowledge.
In my own essay, i have mentioned some aspects that indicate non-constancy of the Physical constants and also possibility of force-field strengths variation with time, in order to understand the Universe from its birth itself. Currently, the tendency to project the Physics evolved in past few hundred years to explain all the observed facts seen about the universe evolution( WMAP data)appears to be insufficient!
Kevembuangga wrote on Nov. 26, 2008 @ 18:03 GMT
I am curious about your opinion on some other weird speculations by Carlos Rodriguez on the "nature" of (space)time and reality: Are We Cruising a Hypothesis Space?
Matthias wrote on Nov. 26, 2008 @ 20:27 GMT
so is he a Boltzmann Brain?
I don't quite see why "we" shouldn't be the fluke among flukes - given infinite time there will be infinite Brains.
George Musser wrote on Nov. 26, 2008 @ 21:18 GMT
Sean, I'm a little confused by how the duality argument bears on the frozen-time problem. In what sense is time really part of the quantum description in the bulk? How do we know that the time parameter in the bulk is not just the time parameter on the boundary? That is to say, do the dynamics truly generate an internal notion of time or are we still just presuming time from the outset?
Does assuming the validity of the Schrödinger equation beg the question (of eternal time)?
P.S. The analogy of two straight lines, with a point of closest approach, is very elegant.
Member Sean Carroll wrote on Nov. 27, 2008 @ 00:29 GMT
George, it's probably not a good idea to think in terms of "bulk" and "boundary" here. I'm not proposing some specific duality between the complicated real-world spacetime and a dual theory; I'm just using the successful examples of duality to motivate the idea that there exists some description of the universe that takes the form of an ordinary time-dependent quantum system. As Moshe points out, there is an additional assumption that "time" in our universe is at least somewhat related (although it might only be approximate) to the time parameter in the quantum-mechanical theory.
The assumption is that the Schrodinger equation is right, but that certainly implies that time goes forever.
Brian Beverly wrote on Nov. 27, 2008 @ 08:09 GMT
I want to thank you for writing a straightforward essay despite having impressive credentials. I have found a correlation, the more impressive the credentials the harder and more intelligent sounding the author made their essay. The less impressive the credentials the more references there are to energy being anything except a conserved number. I'm going to give you one of my restricted votes since you have written a clear and straightforward essay. I agree with you that time does exist; however, I believe your approach can be made more rigorous. I know you have an idea that is probably right but I want your mathematics to lead to deeper insight and not frustration.
When discussing real time try to avoid equations with imaginary numbers. A Hilbert space by definition is an infinite dimensional space. In quantum mechanics a finite number of the terms have non-zero coefficients. It may seem like splitting hairs but it is an important distinction. A Hilbert space is also a mathematical fantasy that helps with calculations; it is not physically identical to the universe. Explaining time by only focusing on the time evolution of the wavefunction without including collapse is not possible. Entropy can only be measured after the wavefunction collapses. In equation 5 obtaining an infinite TAUab is only possible if 2pi is divided by zero. Lastly, infinite eigenvalues with the same E means infinite degeneracy for everything in the universe.
Having written that I do know that you are smarter than me and an FQXI member. This is why I have been hesitant to comment on your essay and other members. I only ask that you please avoid wrath in your reply because we are on the same side.
Narendra Nath wrote on Nov. 27, 2008 @ 15:10 GMT
Dear Sean Carroll,
i am requesting you to see my post of yesterday, Nov., 26. Of course it is certainly your choice to ignore response to the same.
Cristi Stoica wrote on Nov. 28, 2008 @ 21:28 GMT
Dear Sean,
I salute your well written essay defending the reality of time.
I have two questions about the infinite dimensionality of the Hilbert space, which you consider to be required for reconciling the idea of a Universe undergoing unitary evolution with the observed level of entropy.
1) Let us consider a Universe with a finite number of particles at a given time. If a particle evolves influenced only by its interaction with a finite number of particles (all the others), wouldn't it stay within a finite dimensional Hilbert subspace of the (possibly infinite dimensional) particle's Hilbert space? Therefore, if the system starts with a finite number of interacting particles, the total Hilbert space, as a tensor product of finite dimensional Hilbert spaces, should be finite dimensional.
2) Since the time period increases exponentially with the dimension of the Hilbert space, which, in turn, increases exponentially with the number of particles, isn't it possible that even a finite dimensional Hilbert space be enough?
In this case, the Universe will have a finite period, although very long.
Best wishes,
Cristi Stoica
Dimi Chakalov wrote on Dec. 2, 2008 @ 05:54 GMT
How can you talk about the Heraclitean time (footnote 4), and not address the issue of 'elementary timelike displacement', as created (?) by the so-called dark energy? Five years ago, in your astro-ph/0310342, you were musing on "a problem, a puzzle, and a scandal." Regarding the latter, may I suggest to check out some well known, since 1918, facts here.
J. Smith wrote on Dec. 2, 2008 @ 19:56 GMT
Dear Sean,
it seems that you ignore the discussion about the intrinsic unobservability of the quantity time, and about how clocks work.
The point is exactly to recognize that there are no real time meters, because the definition is self-recursive, and to find a way to break such recursivity, or to show how you can do without time. Starting with the assumption that time exists and hoping that consistency follows recalls the description of the solar system with epicycles: it works very well but it assumes very wrong principles...
Member Sean Carroll wrote on Dec. 3, 2008 @ 18:08 GMT
Cristi-- Even if it were possible to describe a system with a finite number of particles using a finite-dimensional subspace of an infinite-dimensional Hilbert space, that doesn't necessarily mean that the subspace would be spanned by a finite number of *energy eigenstates*. If it were, then the evolution would be identical to that of a finite-dimensional Hilbert space.
And an exponentially large number is still not good enough -- compared to infinity, even a large number is still small.
Cristi Stoica wrote on Dec. 4, 2008 @ 12:15 GMT
Dear Sean,
You are definitely right, if we represent the states in terms of energy eigenstates, there is a probability of 0.(9) to need an infinite dimensional eigenbasis.
Cristi Stoica
“Flowing with a Frozen River”,
Michael Silberstein wrote on Dec. 6, 2008 @ 00:17 GMT
Dear Sean,
Very clear essay. My concerns are addressed to you and all other wave function (Hilbert space) fundamentalists. I know you want to table this question for the most part and explore a toy QM model but one can't resist asking: whence spacetime? Starting with an infinite-dimensional Hilbert space, how are you going to derive spacetime (GR, Lorentz invariance, etc.)? Furthermore, how are you going to explain the illusion that we live in a 3D world? My understanding is that those background independent models of QG that do "recover" spacetime either assume a global notion of time or causality (the light-cone structure). In either case, why isn't this cheating? And of course in order for your Heraclitean view to prevail, background independence is essential otherwise you just have a "quantum-block" world of the sort defended by Saunders and other Oxford-Everettians who are likewise wave function fundamentalists. So obviously, the Everett move alone doesn't entail the fundamentality of time and change, on the contrary, the most sophisticated Everettians (on this branch anyway) are block-worlders.
I get that somehow duality and de Sitter space are part of your answer here, but I don't fully follow the logic, how exactly do these two answer my questions? I look forward to your reply.
Tevian Dray wrote on Dec. 8, 2008 @ 06:46 GMT
An elegant argument that infinite time requires an infinite dimensional Hilbert space. This really brings home the difference between "infinite" and "arbitrarily large".
Dr. E (The Real McCoy) wrote on Dec. 8, 2008 @ 18:04 GMT
Hello Sean,
I was hoping for a bit of a dialogue, but then again, the lack of dialogue will be useful to historians of science in understanding and characterizing why our era has seen no progress in theoretical physics, despite unprecedented funding and resources.
Never before have so many been paid so much to advance physics so little. Indeed, future historians will see that overfunding...
T H Ray wrote on Dec. 10, 2008 @ 14:02 GMT
Sean, I fully agree with your conclusion of time evolution in an infinite dimensional Hilbert space and the relation to quantum mechanical unitarity.
I want to suggest an alternative, however, to your statement:
"Think of two particles moving on straight lines in an otherwise empty three-dimensional space. No matter how we choose the lines, there will always be some point of closest
approach, while the distance between the particles will grow without bound sufficiently far in the future and
the past."
I think that the distance between the two points is bounded in the past by a 1-dimension information channel, and grows without bound in the future. I realized this in replying to another entrant, Ryan Westafer:
"Suppose one draws a squiggly vertical line to represent a singularity. Curved lines drawn over the top and bottom of the singularity form a convex-lens shape (gravitational lensing). Label the area left of the singularity, "present," and the area to the right of the singularity, "past." If the past is assigned a negative value and the present a positive value, the singularity would be the zero-valued future. The past area is empty; information from the past is channeled along the 1-dimensional edges of the "lens;" the present area is filled with events. An observer from the present cannot look back into the past without staring into the future of the black hole event horizon. Connecting with my own theory:
Because we live in a 10 dimension event space, which as I calculated and explained is identical to the 4-dimension horizon, our only access to the past is in the one-dimensional time parameter. The asymptotic lines trailing to the right where the "lens" closes (but not quite) is the d >= 11, n-dimension Hilbert space. The "emptiness" of the past space is handled analytically in my mathematical model by calculation in the complex plane for reasons that I think should be obvious--the 2-dimensionality of the information channel (the surface of the lens' edge) is a negatively valued space, and the ratio of two negative complex numbers is real and positive."
The reference is to my essay, "Time counts." Note that I agree with Maldacena holography, as you mention, that finds equivalence between a complete theory of quantum gravity in 10 dimensions and quantum field theory in 4 dimensions. I construct from first principles the identity between the 4 dimension horizon and the 10 dimension boundary.
All best,
Dimi Chakalov wrote on Dec. 10, 2008 @ 21:29 GMT
To quote from your essay (p. 9), you take "a very reasonable, if far from unimpeachable, set of assumptions -- a quantum state evolving in time according to the conventional Schrodinger equation with a time-independent Hamiltonian", and set your goal (p. 4) as "it is worth our effort to pursue their ramifications and see where we end up."
I have a simple suggestion. Five years ago, in your arXiv:astro-ph/0310342v2, you were musing on the “smooth tension” of the "dark energy", and acknowledged "a problem, a puzzle, and a scandal".
To clarify what kind of "time" may be implied in the set of assumptions in your recent essay, try to embed the “smooth tension” into some Cauchy surface, as explained in your graduate-level textbook "Spacetime and Geometry".
If you fail, I hope you will have a much better idea of "where we end up" with your essay, and how to fix your problems.
Good luck.
Dimi Chakalov
Dr. E (The Real McCoy) wrote on Dec. 21, 2008 @ 19:04 GMT
Hello Sean,
I think Lee Smolin has some words of wisdom regarding the nature of physical theory, and I was wondering what you might think of them. Smolin's words seem to harken back to those of Galileo/Einstein--the traditional heroes who advanced physics by rugged ingenuity.
In the attached paper, I present a table which shows how MDT adheres to the more heroic...
attachments: Moving_Dimensions_Theory__Heros_Journey_Physics.pdf
amrit wrote on Dec. 23, 2008 @ 16:25 GMT
Dear Sean, yes, time exists; time is a "coordinate of motion".
yours amrit
attachments: 3_In_The_Theory_of_Relativity_Time_is_a_Coordinate_of_Motion__Sorli_2009.pdf
Philip Gibbs wrote on Dec. 29, 2008 @ 14:39 GMT
The existence of a photon or any other single harmonic oscillator is enough to prove that Hilbert space is infinite dimensional:
[a,a*] = 1
Take the trace of either side to show that this cannot have any finite dimensional representations.
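Philip's trace argument can be checked directly: truncate a and a† to N×N matrices and the commutator necessarily betrays the truncation. The sketch below (illustrative; any N works) shows that [a, a†] comes out as the identity everywhere except the last diagonal entry, which is forced to −(N−1) precisely so that the trace stays zero.

```python
import numpy as np

N = 6  # any finite truncation; the same failure happens for every N

# Truncated annihilation operator: a|n> = sqrt(n)|n-1>, kept to N levels
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
comm = a @ a.conj().T - a.conj().T @ a          # [a, a_dagger]

print(np.diag(comm))              # [1, 1, ..., 1, -(N-1)] instead of all ones
print("trace:", np.trace(comm))   # 0, as any finite-dimensional commutator must be
```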
Narendra Nath wrote on Jan. 2, 2009 @ 07:48 GMT
I often wonder who can quantify 0 and infinity. To me these two are merely 'relative' in nature. The experimental observations (facts) decide when we can take a physical parameter to be 0 or tending to 0. The same is true for 'infinity'. The significance comes with data, not with purely mathematical considerations. Does it appear proper to say that time can have an infinite range just because the time-dependent Schroedinger equation so demands, in order to maintain the sanctity of 'psi'? The physical reality comes only from the product psi·psi*. The very duality of wave/particle nature comes from the necessity of an 'uncertain' space/time picture. The reality can only be provided through the multiplicity of events and probabilistic considerations, as an individual event can no longer be pursued with classical determinacy.
Maybe I am just repeating background that is already well known. The mathematics of quantum physics then explains the significance of the probabilistic reality of multiple events in measurement, in contrast with the classical study of individual events.
Brian Beverly wrote on Jan. 21, 2009 @ 00:59 GMT
Does it make sense to order imaginary numbers?
Order Axioms:
1) A number can not be less than itself
1) makes sense
2) i < 2i makes sense
3) a bit tricky:
5) This is the key axiom!
Dr. E (The Real McCoy) wrote on Feb. 15, 2009 @ 19:00 GMT
Hello Sean! Hope all is well! I was wondering what your take might be on Lee Smolin's most recent comments-- reflecting his epic change of heart & mind--that time is indeed now real.
What do you make of this?
It is great that Lee is coming around and seeing time as a *physically* real entity. MDT goes a step further in seeing time as a *physically* real entity that emerges...
attachments: 2_2_wheeler_recommendation_mcgucken_medium2.jpg, 1_retina2.jpg
Gaetano Barbella wrote on Mar. 17, 2009 @ 07:17 GMT
I am the author of an e-book published last December by Macro Edizioni, titled "I due Leoni Cibernetici" ("The Two Cybernetic Lions"), with the subtitle "L'alfa e l'omega di una matematica ignota, pi greco e la sezione aurea" ("The alpha and omega of an unknown mathematics: pi and the golden section").
The web address is this:
I present this e-book on my website at this address: Il geometra pensiero in rete.
Kind regards,
Gaetano Barbella
John wrote on Apr. 7, 2009 @ 14:58 GMT
Well, time does exist. See
Horwitz, L.P. (2005) On the significance of a recent experiment demonstrating quantum interference in time.
Rodney Bartlett wrote on Jan. 30, 2011 @ 12:55 GMT
Dear Dr. Carroll,
Here's a post that tries to comment on FQXi's 2008 essay contest (The Nature of Time) as well as its 2010 essay contest (Is Reality Digital or Analog?)
We have to wonder if the Large Hadron Collider was worth all the time and money it took to build. It won't find the Higgs boson. It may well "prove" that strings exist but this will only deceive the world because...
Rodney Bartlett wrote on Feb. 2, 2011 @ 03:23 GMT
I know I can't submit another essay. I don't plan to - these are just some comments that came to mind after thinking about my essay. They don't seem very relevant to the topic "Is Reality Digital or Analog?" but writing them has given even more satisfaction than writing the essay, and I'm in the mood to share them with the whole world. So if you've got time to read them...
Rodney Bartlett wrote on Feb. 7, 2011 @ 02:56 GMT
According to the Community Ratings, my essay in the 2011 Essay Contest is sliding further down the ratings each day. But I'm having more luck with a science journal called General Science Journal - comments of mine inspired by the essay (which are nearly 20,000 words long and include comments about "The Nature of Time" as well as "Is Reality Digital or Analog?") were published in the Journal on Feb. 6 and may be viewed at
Mark N. Cowan wrote on Feb. 17, 2011 @ 16:08 GMT
I didn't have time to submit my essay for the reality contest, but the approach of theoretical sociophysics would lay out four layers of time: the changing state of the universe; physics understanding this change as the underlying character of temporal sequence; timing and tempo; and then, with the emergence of behaviourally modern humans, a new kind of causation where humans can frame the changing world around...
Russ Otter wrote on Nov. 15, 2011 @ 21:47 GMT
The binding of existence
|
5a578000ad16f344 | Soliton bands in anharmonic quantum lattices
J. C. Eilbeck, H. Gilhøj, Alwyn C. Scott
Research output: Contribution to journal › Article
The number state method is used to calculate binding energies and effective masses from soliton bands for four quantum lattices: (i) the discrete nonlinear Schrödinger equation, (ii) the Ablowitz-Ladik equation (a q-boson model), (iii) a fermionic polaron model, and (iv) the Hubbard model. Results are expressed as functions of the number of freedoms and the ratio of anharmonicity to dispersion, and several physical interpretations are suggested. © 1993.
Original language: English
Pages (from-to): 229-235
Number of pages: 7
Journal: Physics Letters A
Issue number: 4
Publication status: Published - 4 Jan 1993
|
3f98d9404621f0f1 | The Russian Scalar
According to woo researcher Thomas Bearden, the Scalar Interferometer is a powerful superweapon that the Soviet Union used for years to modify weather in the rest of the world. It taps the quantum vacuum energy, using a method discovered by T. Henry Moray in the 1920s. It may have brought down the Columbia spacecraft. However, some conspiracy theorists believe Bearden is an agent of disinformation on this topic.
Bearden was pushing the medical effects of scalar waves as early as 1991. He specifically attributed their powers to cure AIDS, cancer and genetic diseases to their quantum effects and their use in “engineering the Schrödinger equation.” They are also useful in mind control.
Scalar waves appear to have broken out into the woo mainstream around 2005 or 2006, with this text (now widely quoted as the standard explanation) from The Heart of Health; the Principles of Physical Health and Vitality by Stephen Linsteadt, NHD:
When an electric current flows through the wires in opposite directions, the opposing electromagnetic fields from the two wires cancel each other and create a scalar wave. The DNA antenna in our cells’ energy production centers (mitochondria) assumes the shape of what is called a super-coil. Supercoil DNA look like a series of Möbius coils. These Möbius supercoil DNA are hypothetically able to generate scalar waves. Most cells in the body contain thousands of these Möbius supercoils, which are generating scalar waves throughout the cell and throughout the body.
|
b3a629de9074692c | Schrödinger cat
In the end I have sat down to write, and in fact this will only get finished if the machine doesn't hang on me. You may well catch me mid-draft, so excuse me in advance.
Anyway, I wanted to talk about something that many have written about before, so I doubt I'll tell you anything you don't already know. I'm going to talk about Schrödinger's cat.
First of all, let's start with Schrödinger. Erwin Schrödinger was one of the greats of quantum physics, who contributed the Schrödinger equation, which describes how the wave function of a particle evolves in time.
Before our friend came along it was already known, or at least intuited, that at very small distances particles behave like waves, but nobody had managed to describe how that behavior develops over time.
Within quantum physics Schrödinger is comparable to Einstein, and his equation has a similar reach.
Without going into too much detail: Schrödinger built in Planck's ideas so that one can calculate how the wave function varies in time, which makes it possible to work out, for example, where an electron orbiting an atom is likely ("statistically") to be found at a given moment, something that was previously impossible to know. He even managed to adapt the treatment to whether the particle is moving very fast (relativistically, as they say) or not. Quite an achievement.
Schrödinger also, as I explained the other day, tried to show people that quantum physics was not something separate from or alien to the classical physical world. To argue that the quantum physics described by his equations was nothing more than a special case of classical physics, he devised his famous thought experiment: Schrödinger's cat.
In the experiment, as you know, a cat is shut in a box with a radioactive atom. With them are a Geiger counter which, as you know, can detect radioactivity, and a flask of cyanide. The atom has a 50% chance of emitting radiation. If the counter detects radioactivity, it releases the cyanide and the cat... dies. Simple.
And this is where it all gets interesting. The decay phenomenon depends on the wave function of the radioactive particle; we know the equation of that wave function, and through the Schrödinger equation we know how it evolves in time.
At first it was thought that quantum phenomena did not influence classical physics; they were like separate worlds. But in this experiment, if the particle decays (a quantum phenomenon), it affects the cat (which is not quantum), relating both types of physics. That is the first and most important conclusion.
The second conclusion we all know. The quantum wave function is a superposition of two states (and therefore of two states for the cat), which explains how a particle, at these scales, can be in more than one state at a time. So while in classical physics there would be only one state (cat alive or dead, particle "whole" or decayed), in quantum physics it is in both states at once without any problem. From here one could go on to the Heisenberg uncertainty principle, which leads to the third conclusion, the funniest one.
The cat and the particle, while in the box, are in both states; fine, both are quantum states. But for an observer it can only be in one state, because if you open the box the cat is either alive or dead (unless it is a zombie). What does this mean?
Honestly, the wave function of a particle does not tell us exactly where it is or at what speed it moves, much less how that develops over time, despite the Schrödinger equations. What it gives us are the probabilities of finding it here or there; we can never know exactly, since the mere act of observation changes the wave function every time you try to measure it (cool, huh?).
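If you want to see the "only probabilities" idea in action, here is a minimal sketch, assuming an ideal 50/50 superposition and idealized box-openings (the state labels are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition: |cat> = (|alive> + |dead>) / sqrt(2)
state = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Born rule: the probability of each outcome is |amplitude|^2
probs = np.abs(state) ** 2          # -> [0.5, 0.5]

# "Opening the box" many times on identically prepared cats:
outcomes = rng.choice(["alive", "dead"], size=10_000, p=probs)
print({o: int((outcomes == o).sum()) for o in ("alive", "dead")})
# Each individual opening yields a definite state; only the
# frequencies over many runs reproduce the 50/50 prediction.
```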
This means it is the observer who, upon making the measurement, "condenses" (let's settle on that word) the wave function, turning probabilities into a concrete measurement and concrete data (which then change again).
Well, back to the cat: while it is in the box both possibilities exist, and when we open the box we condense its quantum states, its wave function, into something particular. And because the chances of decay are what they are, nothing says that a moment before or a moment after the cat would have been in the same state (alive or dead) had we not made our observation, our "condensation". So it is the observer who makes reality.
What does this mean? Well, honestly, that there may be many realities depending on who measures. The observer or observers of what happens in our lives are the ones who condense it into a timeline, a specific line of time; there may be other timelines for the wave function and therefore different realities.
And where does this lead? To one basis of the theory of the multiverse, in which universes are merely different timelines of the quantum wave function of what surrounds us. The collapse and condensation of an observation leads to a concrete reality that may (or may not) be the same as the one someone else would have reached. That opens the very tangled question of the condensation of waves by different observers, a mathematical topic that is very interesting and philosophically even more so, which I leave for you to think about.
An example: can I condense the wave equations in a different way from someone else when that person is not present? Do we then have different timelines? And when we come together and observe the same process, do we collapse onto the same timeline? In that case, how do two different timelines collapse into a single one for two observers? And what if the problem involves more than two observers?...
All this gives rise to a very complicated mathematics that is being developed right now, and I, in particular, find it very hard to understand even with all the tequila in the world.
|
08d13f68c4fc598a | Advances in Pure Mathematics (APM), ISSN 2160-0368, Scientific Research Publishing. DOI: 10.4236/apm.2016.61005 (APM-62902), Physics & Mathematics.
Inverse Spectral Theory for a Singular Sturm Liouville Operator with Coulomb Potential
Etibar S. Panakhov (Department of Mathematics, Firat University, Elazig, Turkey) and Ismail Ulusoy (Department of Mathematics, Adiyaman University, Adiyaman, Turkey)
E-mail: epenahov@firat.edu.tr (E.S.P.), iulusoy@adiyaman.edu.tr (I.U.)
Received 21 September 2015; accepted 18 January 2016; published 21 January 2016.
Copyright © the authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/
We consider the inverse spectral problem for a singular Sturm-Liouville operator with Coulomb potential. In this paper, we give an asymptotic formula and some properties for this problem by using methods of Trubowitz and Poschel.
Keywords: Coulomb Potential; Asymptotic Formula; Normalizing Eigenfunction
1. Introduction
The Sturm-Liouville equation is a second order linear ordinary differential equation of the form
\[ \frac{d}{dx}\Big(p(x)\frac{dy}{dx}\Big) + \big(q(x)+\lambda r(x)\big)\,y = 0 \tag{1.1} \]
for some coefficient functions $p$, $q$, $r$ and a spectral parameter $\lambda$. It was first introduced in an 1837 publication [1] by the eminent French mathematicians Joseph Liouville and Jacques Charles François Sturm. The Sturm-Liouville equation (1.1) can easily be reduced to the form
\[ -y'' + q(x)\,y = \lambda y. \tag{1.2} \]
If we assume that p(x) has a continuous first derivative, and p(x), r(x) have a continuous second derivative, then by means of the substitutions
where c is given by
Equation (1.1) then assumes the form (1.2), with the potential replaced by a transformed one.
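For orientation, the textbook Liouville substitutions take the form below; normalizations vary between authors, so read this as representative rather than as the authors' exact formulas:

\[
t=\int_0^x \sqrt{\frac{r(s)}{p(s)}}\,ds,
\qquad
u(t)=\big(p(x)\,r(x)\big)^{1/4}\,y(x),
\]

under which (1.1) becomes $-u'' + \tilde q(t)\,u = \lambda u$ for a transformed potential $\tilde q$ built from $p$, $q$, $r$ and their first two derivatives; the constant $c$ mentioned above is the normalization $c=\int_0^1\sqrt{r(s)/p(s)}\,ds$ that rescales the new variable to a standard interval.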
The transformation of the general second order equation to canonical form and the asymptotic formulas for the eigenvalues and eigenfunctions were given by Liouville. A deep study of the distribution of the zeros of eigenfunctions was done by Sturm. Firstly, the formula for the distribution of the eigenvalues of the one-dimensional Sturm operator, defined on the whole real axis with a potential increasing at infinity, was given by Titchmarsh in 1946 [2] [3]. Titchmarsh also showed the distribution formula for the Schrödinger operator. In later years, Levitan improved Titchmarsh's method and found important asymptotic formulas for the eigenvalues of different differential operators [4] [5]. Sturm-Liouville problems with a singularity at zero come in various versions. The best known case is the one studied by Amirov [6] [7], in which the potential has a Coulomb-type singularity $A/x$ at the origin. In these works, properties of the spectral characteristics were studied for Sturm-Liouville operators with Coulomb potential which have discontinuity conditions inside a finite interval. Panakhov and Sat estimated nodal points and nodal lengths for Sturm-Liouville operators with Coulomb potential [8]-[10]. Bas and Metin defined a fractional singular Sturm-Liouville operator having a Coulomb potential of type A/x [11].
Let us give some fundamental physical properties of the Sturm-Liouville operator with Coulomb potential. Learning about the motion of electrons moving under the Coulomb potential is of significance in quantum theory. Solving these types of problems provides the energy levels not only of the hydrogen atom but also of single-valence-electron atoms such as sodium. The Coulomb potential is given by $U = -e^2/r$, where r is the distance to the nucleus and e is the electronic charge. Accordingly, we use the time-dependent Schrödinger equation
\[ i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\,\Delta \Psi + U\,\Psi, \]
where $\Psi$ is the wave function, $\hbar$ is Planck's constant and m is the mass of the electron.
In this equation, if the Fourier transform in time is applied, it converts to the stationary energy equation
\[ -\frac{\hbar^2}{2m}\,\Delta \psi + U\,\psi = E\,\psi. \]
Therefore, the energy equation in the field with Coulomb potential becomes
\[ -\frac{\hbar^2}{2m}\,\Delta \psi - \frac{e^2}{r}\,\psi = E\,\psi. \]
If the atom is placed in a different potential field, the energy equation takes the corresponding form with the new potential.
If we make the necessary transformation, then we can get a Sturm-Liouville equation with Coulomb potential
\[ -y'' + \Big(\frac{A}{x} + q(x)\Big)\,y = \lambda y, \]
where $\lambda$ is a parameter which corresponds to the energy [12].
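As an illustrative aside, the Dirichlet eigenvalues of such an operator can be approximated numerically, and their growth already hints at the asymptotics studied below. This is a minimal finite-difference sketch, assuming the interval [0, 1], A = 1 and q = 0; it is not taken from the paper:

```python
import numpy as np

# Dirichlet eigenvalues of  -y'' + (A/x) y = lambda * y  on (0, 1),
# y(0) = y(1) = 0, via a second-order finite-difference scheme.
A, N = 1.0, 800
h = 1.0 / N
x = np.linspace(h, 1.0 - h, N - 1)          # interior grid (avoids x = 0)

H = (np.diag(2.0 / h**2 + A / x)
     + np.diag(-np.ones(N - 2) / h**2, 1)
     + np.diag(-np.ones(N - 2) / h**2, -1))

for n, lam in enumerate(np.linalg.eigvalsh(H)[:4], start=1):
    print(n, lam, n**2 * np.pi**2)          # lambda_n against its leading term
```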
Our aim here is to find asymptotic formulas for a singular Sturm-Liouville operator with Coulomb potential on an appropriate domain.
Also, we give the normalizing eigenfunctions and spectral functions.
2. Basic Properties
We consider the singular Sturm-Liouville problem
\[ -y'' + \Big(\frac{A}{x} + q(x)\Big)\,y = \lambda y, \qquad 0 < x < 1, \tag{2.1} \]
where the function q is real-valued and square-integrable on [0, 1]. Let us denote by the solution of (2.1) satisfying the initial condition
and by the solution of the same equation, satisfying the initial condition
Lemma 1. The solution of problem (2.1) and (2.2) has the following form:
Proof. Since satisfies Equation (2.1), we have
Integrating the first integral on the right side by parts twice and taking the conditions (2.2) into account, we find that
which is (2.4).
Lemma 2. The solution of problem (2.1) and (2.3) has the following form:
Proof. The proof is the same as that of Lemma 1.
Now we give some estimates of and which will be used later. For each fixed x in [0, 1] the map is an entire function on which is real-valued on [13] . Using the estimate
we get
Since and
we have
From (2.6) the inequality is easily checked
where c is uniform with respect to q on bounded sets in.
Lemma 3 (Counting Lemma). [13] Let and be an integer. Then has exactly N roots, counted with multiplicities, in the open half plane
and for each, exactly one simple root in the egg shaped region
There are no other roots.
From this Lemma there exists an integer N such that for every there is only one eigenvalue in Thus for every
can be chosen independent of q on bounded sets. The following theorem [13] shows that the eigenvalues are the zeroes of the map and that these zeroes are simple.
Theorem 1. If is Dirichlet eigenvalue of q in, then
In particular,. Thus, all roots of are simple.
Proof. The proof is similar as that of ([13] , Pöschel and Trubowitz).
3. Asymptotic Formula
We need the following lemma for proving the main result.
Lemma 4. For every f in,
Proof. Firstly, we shall prove the relation (3.1)
By the Cauchy-Schwarz inequality, we get
Since f is in, the last two integrals are equal to
So (3.3) is equivalent to
Finally, we shall prove the relation (3.2)
This proves the lemma.
The main result of this article is the following theorem:
Theorem 2. For,
Proof of the Main Theorem. Since it must be. Because is a nontrivial solution of Equation (2.1) satisfying Dirichlet boundary conditions, we have
From (2.7) one gets the inequality
From (3.5) integral in the equation of (3.4) takes the form
By using difference formulas for sine we have
From Lemma 4 we get
Thus, by using this inequality (3.4) can be written in the form
From (2.8) we conclude that
Since and, (3.7) is equivalent to
So we get
From (2.8) we have
In this case, the theorem is proved.
From this theorem, the map
from q to its sequences of Dirichlet eigenvalues sends into S. Later, we need this map to characterize spectra which is equivalent to determining the image of.
4. Inverse Spectral Theory
To each eigenvalue we associate a unique eigenfunction normalized by
Let’s define the normalizing eigenfunction:
Lemma 5. For,
This estimate holds uniformly on bounded subsets of.
Proof. Let and. By the basic estimate for,
By using this estimate we have
So we get
Thus we conclude that
Dividing by we get
Also, we need to have asymptotic estimates of the squares of the eigenfunctions and products
Lemma 6. For,
This estimate holds uniformly on bounded subsets of.
Proof. We know that
By the basic estimate for, we have
The map is real analytic on. Now we give asymptotic behavior for.
Theorem 3. Each is a compact, real analytic function on with
Its gradient is
The error terms are uniform on bounded subsets of.
Proof. From [14] we have
So we calculate the integral
Finally, since, we get
By the Cauchy-Schwarz inequality, we prove the theorem.
Formula (4.3) shows that belongs to. By Theorem 3, the map
from q to its sequences of -values maps into the. So we obtain a map
from into the.
Theorem 4. [13] is one-to-one on.
Let be the Fréchet derivative of the map at q.
Theorem 5. [14] is an isomorphism from onto.
Cite this paper
Etibar S. Panakhov and Ismail Ulusoy (2016) Inverse Spectral Theory for a Singular Sturm Liouville Operator with Coulomb Potential. Advances in Pure Mathematics, 6, 41-49. doi: 10.4236/apm.2016.61005
References
[1] Sturm, C. and Liouville, J. (1837) Extrait d'un mémoire sur le développement des fonctions en séries dont les différents termes sont assujettis à satisfaire à une même équation différentielle linéaire, contenant un paramètre variable. Journal de Mathématiques Pures et Appliquées, 2, 220-233.
[2] Birkhoff, G.D. (1908) Boundary Value and Expansion Problems of Ordinary Linear Differential Equations. Transactions of the American Mathematical Society, 9, 219-231. http://dx.doi.org/10.1090/S0002-9947-1908-1500810-1
[3] Titchmarsh, E.C. (1946) Eigenfunction Expansions Associated with Second-Order Differential Equations. Vol. 1, Clarendon Press, Oxford.
[4] Titchmarsh, E.C. (1958) Eigenfunction Expansions Associated with Second-Order Differential Equations. Vol. 2, Clarendon Press, Oxford.
[5] Levitan, B.M. (1978) On the Determination of the Sturm-Liouville Operator from One and Two Spectra. Mathematics of the USSR Izvestija, 12, 179-193. http://dx.doi.org/10.1070/IM1978v012n01ABEH001844
[6] Amirov, R.Kh. (1985) Inverse Problem for the Sturm-Liouville Equation with Coulomb Singularity Its Spectra. Kand. Dissertasiya, Baku.
[7] Topsakal, N. and Amirov, R. (2010) Inverse Problem for Sturm-Liouville Operators with Coulomb Potential Which Have Discontinuity Conditions inside an Interval. Mathematical Physics, Analysis and Geometry, 13, 29-46. http://dx.doi.org/10.1007/s11040-009-9066-y
[8] Sat, M. and Panakhov, E.S. (2012) Inverse Nodal Problem for Sturm-Liouville Operators with Coulomb Potential. International Journal of Pure and Applied Mathematics, 80, 173-180.
[9] Sat, M. and Panakhov, E.S. (2013) Reconstruction of Potential Function for Sturm-Liouville Operator with Coulomb Potential. Boundary Value Problems, 2013, Article 49.
[10] Sat, M. (2014) Half Inverse Problem for the Sturm-Liouville Operator with Coulomb Potential. Applied Mathematics and Information Sciences, 8, 501-504. http://dx.doi.org/10.12785/amis/080207
[11] Bas, E. and Metin, F. (2013) Fractional Singular Sturm-Liouville Operator for Coulomb Potential. Advances in Difference Equations, Article ID: 300. http://dx.doi.org/10.1186/1687-1847-2013-300
[12] Blohincev, D.I. (1949) Foundations of Quantum Mechanics. GITTL, Moscow.
[13] Pöschel, J. and Trubowitz, E. (1987) Inverse Spectral Theory. Academic Press, San Diego.
[14] Guillot, J.-C. and Ralston, J.V. (1988) Inverse Spectral Theory for a Singular Sturm-Liouville Operator on [0,1]. Journal of Differential Equations, 76, 353-373. http://dx.doi.org/10.1016/0022-0396(88)90080-0 |
a5f5437b2fc007da | Quantum SHO Wave Functions not Complex?
1. Aug 15, 2010 #1
Gold Member
The Hermite Polynomials are solutions to the Schrödinger equation for the Quantum Simple Harmonic Oscillator. But the Hermite Polynomials are real, not complex. I thought that solutions to the Schrödinger equation always had to be complex. What am I not understanding? Thanks in advance.
2. jcsd
3. Aug 15, 2010 #2
Staff: Mentor
You're missing the time-dependent phase factor. Energy eigenstates generally look like this, in their time-dependent form:
[tex]\Psi_n(x,t) = \psi_n(x) e^{-iE_n t / \hbar}[/tex]
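A quick way to convince yourself the phase factor is not just decoration: for a single eigenstate it cancels in [itex]|\Psi|^2[/itex], but in a superposition it produces genuinely complex, time-dependent behavior. A minimal numerical sketch for the SHO, in units where [itex]\hbar = m = \omega = 1[/itex]:

```python
import numpy as np

# Two energy eigenstates with real spatial parts (hbar = m = omega = 1)
E1, E2 = 0.5, 1.5                 # SHO: E_n = n + 1/2
x = np.linspace(-5, 5, 1001)
psi1 = np.pi**-0.25 * np.exp(-x**2 / 2)                    # ground state
psi2 = np.pi**-0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)   # first excited

def density(t):
    # Superposition (psi1 + psi2)/sqrt(2) with its time-dependent phases
    Psi = (psi1 * np.exp(-1j * E1 * t) + psi2 * np.exp(-1j * E2 * t)) / np.sqrt(2)
    return np.abs(Psi) ** 2

# |Psi|^2 of each eigenstate alone is constant in time, but the
# superposition's density sloshes back and forth at frequency E2 - E1:
print(np.allclose(density(0.0), density(2 * np.pi)))   # True (full period)
print(np.allclose(density(0.0), density(np.pi)))       # False (half period)
```
|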
2b6f580c81dd3430 | The Perverse Economics of Climate Modeling
As the dust continues swirling around emails purloined from the climate research unit at the University of East Anglia, global warming activists keep insisting that computer climate models predicting disaster represent "settled science." How can that be when climate models aren't science at all?
Laws are science. Models are engineering.
Scientists conduct controlled experiments, collect observable data, and construct testable hypotheses. In this case, they compare and discuss the accuracy of various sets of temperature measurements, ice core drillings, or tree ring observations. The peer review process, when properly administered, helps establish a body of accepted facts that both scientists and engineers can work from.
Given the credible allegations that the peer review process may have been corrupted and that some of the data has been cherry picked, it's in everyone's best interest to have a thorough public review of the quality and comprehensiveness of the data collected. This should give all of us a higher degree of confidence about when, where, and how much the earth's climate has actually changed. Such a review can be done without taking a position on potential causes of climate change or proposed solutions. After the data are scrubbed, it might be correct to claim that climate measurement science is "settled."
That's step one.
When testable, repeatable relationships are observed on a set of data that can be expressed in the language of mathematics, scientists often say that they have discovered a new "law." Examples run from the straightforward F = ma and the powerful E = mc² to the numbingly complex Schrödinger equations.
If anyone can articulate a mathematical law that describes the relationships amongst the climate data collected thus far, let them step forward and publish it. Not just the predictions - show us the equation so scientists everywhere can test it to see how well it matches the "settled" climate measurement data.
This has not yet happened.
That doesn't mean we're helpless. It just means that it's time for the engineers to step in. Developing empirical computer models to forecast the behavior of complex, nonlinear, stochastic, and even chaotic systems is what engineers do all the time. Without such models we couldn't build computer chips, airplanes, inventory optimization systems, skyscrapers, cell phones, financial derivatives, and many other things that constitute our man-made world. We do this even if the science is not entirely understood.
Most of these models have to deal with incomplete knowledge and limited testability, and many have to accommodate high degrees of randomness and uncertainty. Models are never "settled" but through trial and error driven by profit and loss they keep getting better. While planes don't fall from the sky as often as they used to and computer chips can be manufactured at very high yield, you might have noticed that many financial derivatives models didn't do such a great job predicting the future. Why is that?
The nuance of modeling is that while scientists ask the objective question "is this true," engineers ask the subjective question "does this solve my problem"?
We know what problem the wizards of Wall Street were trying to solve. "How do I develop derivatives models that maximize my year-end bonus?" That approach delivered fantastic bonuses right up until it didn't, at which point the entire world economy was driven into a ditch.
And climate modelers? Scientists living on the public weal get compensated via a mix of government grants and scientific prestige. The problem they are trying to solve is "How do I get my papers published in the most prestigious journals so I can maximize the size of my next grant?"
If you've ever done computer modeling you know that there are a thousand ways to make a set of curves fit retrospective data in underconstrained systems. And right now, both the government grant and scientific prestige markets are dishing out significant rewards for models that predict runaway climate disaster. So which curves do you think savvy modelers are going to pick?
Calling these climate models "science" and then having the audacity to call them "settled" is the same kind of Big Lie that allowed Congress to assure us that Freddie Mac and Fannie Mae were completely safe so we had nothing to worry about when they embarked on an orgy of sub-prime lending. Every computer model predicting the future behavior of the mortgage market fit all the historical data - right up until the moment that they didn't.
Before we let suspect computer models developed by a handful of people drive the entire world economy into a ditch, don't you think we should take the covers off and invest a little more time and effort to thoroughly examine how these models work? Hopefully this will include analytical critiques from a wider cast of characters than the self-serving cabal whose mendacity and ham-handed attempts to marginalize dissent were recently exposed. Perhaps an open process will help both "sides" focus on attacking weaknesses in the models themselves rather than attacking each other's tribal affiliations. Only then can we hope to get real value for all the money we taxpayers fork over to support these scientists.
|
2dc1e62ffc2a3ce5 | Meet The Creators
• Educator Chad Orzel
• Director Agota Vegso
Additional Resources for you to Explore
Here are more TED-Ed Lessons by the same educator: Particles and waves: The central mystery of quantum mechanics and What is the Heisenberg Uncertainty Principle?
Schrödinger’s Cat is a very fertile subject for discussion, and has also been discussed in this lesson from Josh Samani. Here’s more about the thought experiment described briefly by Minute Physics. Go to the Sixty Symbols video and learn much more detail about Schrödinger’s Cat. For a humorous look at this cat experiment, venture to this site for a simulation.
Erwin Schrödinger shared the 1933 Nobel Prize in Physics with Paul Dirac for his discovery of the equation that governs the behavior of quantum particles. Solving the Schrödinger equation is a central part of quantum mechanics education. Numerous online applets let you look at how this process works, including this one showing how quantum objects can pass through obstacles. Then, play with this one showing how the wave equation gives rise to discrete allowed states as in the Bohr model.
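For readers who want to see where those discrete allowed states come from without an applet, here is a minimal sketch, assuming a harmonic potential and units with ħ = m = ω = 1; diagonalizing the discretized Hamiltonian hands back the evenly spaced levels:

```python
import numpy as np

# Time-independent Schrodinger equation on a grid (hbar = m = omega = 1):
#   -(1/2) psi'' + (1/2) x^2 psi = E psi
N = 1000
x = np.linspace(-8, 8, N)
h = x[1] - x[0]

# Kinetic term as a finite-difference second derivative, plus the potential
diag = 1.0 / h**2 + 0.5 * x**2
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:5]
print(E)   # ~ [0.5, 1.5, 2.5, 3.5, 4.5]: evenly spaced discrete levels
```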
Schrödinger had wide-ranging interests in science and philosophy, and delivered a famous lecture on the physics of biology at Trinity College in 1943. This lecture was turned into a book, What Is Life?. This book is credited with inspiring a number of young scientists to study biology and genetics, including Maurice Wilkins, Francis Crick, and James Watson, who later shared the 1962 Nobel Prize in Medicine for the discovery of the structure of DNA, using data obtained by biophysicist Rosalind Franklin. What was it about this book that was so inspirational? Read this article Writing that inspired a generation of scientists and find out. Trinity College runs an annual lecture series named after Schrödinger in honor of this famous talk.
One of the issues associated with Schrödinger’s cat thought experiment is exactly how an experiment arrives at the single final state that we observe. This question is central to the interpretation of quantum mechanics, with the dominant approach in Schrödinger’s day being the “Copenhagen Interpretation” developed at Niels Bohr’s institute in Denmark. This approach insists on an absolute division between microscopic scales (where objects like electrons behave quantum-mechanically) and macroscopic scales (where objects like cats have definite states), and a “collapse” of the wave function into a definite state at the instant of measurement. One of the chief alternatives is the “Many-Worlds Interpretation” introduced by Hugh Everett in 1957, which holds that all possible measurement results are observed, but in separate branches of the wave function of the universe, which effectively function as separate and inaccessible “universes.” Here are some blog-based discussions of the issues surrounding Many-Worlds. The Many-Worlds Interpretation is also a rich source of inspiration for fiction. Read one here: “Divided by Infinity” by Robert Charles Wilson.
The double-slit experiment is very famous, and has been performed by many scientists. These include physicists at Hitachi who provide photos and video on the web. The importance of superposition for interference experiments can be demonstrated by a “quantum eraser” experiment, in which you can arrange to “tag” the particle showing which slit it went through. This tagging destroys the interference pattern, but the pattern shows up again if you “erase” the which-slit information. Interested in learning more? Scientific American has a guide to making your own quantum eraser for light.
The sharing of electrons between two atoms, which leads to the formation of covalent bonds in molecules, is simulated for a simple system here. As you add more atoms, the situation becomes more complicated, and the states of the system begin to evolve toward broad “bands” of allowed energy, as simulated here. The “band structure” of materials determines their electrical properties, and calculating and measuring band structure is one of the fundamental problems of condensed matter physics. This brief introduction gives a sense of the issues involved.
Finally, superposition states like those in Schrödinger’s thought experiment are not only crucial for modern silicon-based computers, but may be the key to future quantum computers of unparalleled power. Unlike a classical computer whose bits can only have values of “0” or “1,” the “qubits” in a quantum computer can be in a superposition of both “0” and “1” at the same time. This enables quantum computers to solve certain types of problems faster than any classical computer, and has made research into quantum computing a large and active field. A short video on quantum computing research animated by Jorge Cham of Ph.D. comics is here: A more detailed introduction is available from the Institute for Quantum Computing at the University of Waterloo.
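A minimal numerical sketch of that superposition (ideal gate and measurement assumed) makes the contrast with a classical bit concrete:

```python
import numpy as np

rng = np.random.default_rng(42)

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

qubit = H @ ket0                     # (|0> + |1>) / sqrt(2): superposition
probs = np.abs(qubit) ** 2           # Born rule -> [0.5, 0.5]

# Measuring collapses to a definite bit; repetition shows the statistics.
samples = rng.choice([0, 1], size=1000, p=probs)
print(probs, samples.mean())
```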
Find more by this educator/author of How to Teach Physics to Your Dog at this site. |
6c18789c7d6a70b8 | Spin–statistics theorem
In quantum mechanics, the spin–statistics theorem relates the spin of a particle to the particle statistics it obeys. The spin of a particle is its intrinsic angular momentum (that is, the contribution to the total angular momentum that is not due to the orbital motion of the particle). All particles have either integer spin or half-integer spin (in units of the reduced Planck constant ħ).[1][2]
The theorem states that:
• The wave function of a system of identical integer-spin particles has the same value when the positions of any two particles are swapped. Particles with wave functions symmetric under exchange are called bosons.
• The wave function of a system of identical half-integer spin particles changes sign when two particles are swapped. Particles with wave functions antisymmetric under exchange are called fermions.
In other words, the spin–statistics theorem states that integer-spin particles are bosons, while half-integer–spin particles are fermions.
The spin–statistics relation was first formulated in 1939 by Markus Fierz[3] and was rederived in a more systematic way by Wolfgang Pauli.[4] Fierz and Pauli argued their result by enumerating all free field theories subject to the requirement that there be quadratic forms for locally commuting[clarification needed] observables including a positive-definite energy density. A more conceptual argument was provided by Julian Schwinger in 1950. Richard Feynman gave a demonstration by demanding unitarity for scattering as an external potential is varied,[5] which when translated to field language is a condition on the quadratic operator that couples to the potential.[6]
General discussion
In a given system, two indistinguishable particles, occupying two separate points, have only one state, not two. This means that if we exchange the positions of the particles, we do not get a new state, but rather the same physical state. In fact, one cannot tell which particle is in which position.
A physical state is described by a wavefunction, or – more generally – by a vector, which is also called a "state"; if interactions with other particles are ignored, then two different wavefunctions are physically equivalent if their absolute value is equal. So, while the physical state does not change under the exchange of the particles' positions, the wavefunction may get a minus sign.
Bosons are particles whose wavefunction is symmetric under such an exchange, so if we swap the particles the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons.
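The exclusion rule can be seen directly in linear algebra; a minimal sketch with two orthonormal single-particle states:

```python
import numpy as np

# Two orthonormal single-particle states |a>, |b> in a 2-dim space
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def two_particle(u, v, sign):
    # |u>|v> + sign * |v>|u>: symmetric (+1, bosons) or antisymmetric (-1, fermions)
    return np.kron(u, v) + sign * np.kron(v, u)

print(two_particle(a, b, -1))   # fermions in different states: nonzero, fine
print(two_particle(a, a, -1))   # two fermions in the SAME state: identically zero
print(two_particle(a, a, +1))   # bosons happily double-occupy (nonzero)
```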
In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. The operator
$$\int \psi(x,y)\,\phi(x)\phi(y)\,dx\,dy\;|0\rangle$$
(with $\phi$ an operator and $\psi(x,y)$ a numerical function) creates a two-particle state with wavefunction $\psi(x,y)$, and depending on the commutation properties of the fields, either only the antisymmetric parts or the symmetric parts matter.
Let us assume that $x \neq y$ and that the two operators act at the same time; more generally, they may have spacelike separation, as is explained hereafter.
If the fields commute, meaning that the following holds:
$$\phi(x)\phi(y) = \phi(y)\phi(x),$$
then only the symmetric part of $\psi$ contributes, so that $\psi(x,y) = \psi(y,x)$, and the field will create bosonic particles.
On the other hand, if the fields anti-commute, meaning that $\phi$ has the property that
$$\phi(x)\phi(y) = -\phi(y)\phi(x),$$
then only the antisymmetric part of $\psi$ contributes, so that $\psi(x,y) = -\psi(y,x)$, and the particles will be fermionic.
Naively, neither has anything to do with the spin, which determines the rotation properties of the particles, not the exchange properties.
A suggestive bogus argument
Consider the two-field operator product
$$R\,\phi(x)\,R^{-1}\; R\,\phi(-x)\,R^{-1},$$
where R is the matrix that rotates the spin polarization of the field by 180 degrees when one does a 180-degree rotation around some particular axis. The components of $\phi$ are not shown in this notation; $\phi$ has many components, and the matrix R mixes them up with one another.
In a non-relativistic theory, this product can be interpreted as annihilating two particles at positions $x$ and $-x$ with polarizations that are rotated by $180^\circ$ relative to each other. Now rotate this configuration by $180^\circ$ around the origin. Under this rotation, the two points $x$ and $-x$ switch places, and the two field polarizations are additionally rotated by $180^\circ$. So we get
$$R^2\,\phi(-x)\,R^{-2}\; R^2\,\phi(x)\,R^{-2},$$
which for integer spin ($R^2 = 1$) is equal to
$$\phi(-x)\phi(x)$$
and for half-integer spin ($R^2 = -1$) is equal to
$$-\phi(-x)\phi(x)$$
(proved here). Both the operators still annihilate two particles at $x$ and $-x$. Hence we claim to have shown that, with respect to particle states:
$$\phi(x)\phi(-x) = \pm\,\phi(-x)\phi(x),$$
with the plus sign for integer spin and the minus sign for half-integer spin.
So exchanging the order of two appropriately polarized operator insertions into the vacuum can be done by a rotation, at the cost of a sign in the half-integer case.
This argument by itself does not prove anything like the spin–statistics relation. To see why, consider a nonrelativistic spin-0 field described by a free Schrödinger equation. Such a field can be anticommuting or commuting. To see where it fails, consider that a nonrelativistic spin-0 field has no polarization, so that the product above is simply:
$$\phi(x)\phi(-x).$$
In the nonrelativistic theory, this product annihilates two particles at $x$ and $-x$, and has zero expectation value in any state. In order to have a nonzero matrix element, this operator product must be between states with two more particles on the right than on the left:
$$\langle N |\, \phi(x)\phi(-x)\, | N+2 \rangle.$$
Performing the rotation, all that we learn is that rotating the 2-particle state gives the same sign as changing the operator order. This gives no additional information, so this argument does not prove anything.
Why the bogus argument fails
To prove spin–statistics theorem, it is necessary to use relativity, as is obvious from the consistency of the nonrelativistic spinless fermion, and the nonrelativistic spinning bosons. There are claims in the literature of proofs of spin–statistics theorem that do not require relativity,[7][8] but they are not proofs of a theorem, as the counterexamples show, rather they are arguments for why spin–statistics is "natural", while wrong-statistics[clarification needed] is "unnatural". In relativity, the connection is required.
In relativity, there are no local fields that are pure creation operators or annihilation operators. Every local field both creates particles and annihilates the corresponding antiparticle. This means that in relativity, the product of the free real spin-0 field has a nonzero vacuum expectation value, because in addition to creating particles which are not annihilated and annihilating particles which are not subsequently created, it also includes a part that creates and annihilates "virtual" particles whose existence enters into interaction calculations - but never as scattering matrix indices or asymptotic states.
And now the heuristic argument can be used to see that $\langle 0|\phi(x)\phi(-x)|0\rangle$ is equal to $\langle 0|\phi(-x)\phi(x)|0\rangle$ for the spin-0 field, which, since this vacuum expectation value is nonzero, tells us that the fields cannot be anti-commuting.
The essential ingredient in proving the spin/statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition.
Additionally, the assumption (known as microcausality) that spacelike separated fields either commute or anticommute can be made only for relativistic theories with a time direction. Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will be now explained.
Lorentz transformations include 3-dimensional rotations as well as boosts. A boost transfers to a frame of reference with a different velocity, and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions and is termed Euclidean.
A π rotation in the Euclidean xt plane can be used to rotate vacuum expectation values of the field product of the previous section. The time rotation turns the argument of the previous section into the spin–statistics theorem.
The proof requires the following assumptions:
1. The theory has a Lorentz-invariant Lagrangian.
2. The vacuum is Lorentz-invariant.
3. The particle is a localized excitation. Microscopically, it is not attached to a string or domain wall.
4. The particle is propagating, meaning that it has a finite, not infinite, mass.
5. The particle is a real excitation, meaning that states containing this particle have a positive-definite norm.
These assumptions are for the most part necessary, as the following examples show:
1. The spinless anticommuting field shows that spinless fermions are nonrelativistically consistent. Likewise, the theory of a spinor commuting field shows that spinning bosons are too.
2. This assumption may be weakened.
3. In 2+1 dimensions, sources for the Chern–Simons theory can have exotic spins, despite the fact that the three-dimensional rotation group has only integer and half-integer spin representations.
4. An ultralocal field can have either statistics independently of its spin. This is related to Lorentz invariance, since an infinitely massive particle is always nonrelativistic, and the spin decouples from the dynamics. Although colored quarks are attached to a QCD string and have infinite mass, the spin-statistics relation for quarks can be proved in the short distance limit.
5. Gauge ghosts are spinless fermions, but they include states of negative norm.
Assumptions 1 and 2 imply that the theory is described by a path integral, and assumption 3 implies that there is a local field which creates the particle.
The rotation plane includes time, and a rotation in a plane involving time in the Euclidean theory defines a CPT transformation in the Minkowski theory. If the theory is described by a path integral, a CPT transformation takes states to their conjugates, so that the correlation function
$$G(x) = \langle 0 |\, \phi(0)\phi(x)\, | 0 \rangle$$
must be positive definite at x=0 by assumption 5 (the particle states have positive norm). The assumption of finite mass implies that this correlation function is nonzero for x spacelike. Lorentz invariance now allows the fields to be rotated inside the correlation function in the manner of the argument of the previous section:
$$\langle 0 |\, \phi(0)\phi(x)\, | 0 \rangle = \pm\,\langle 0 |\, \phi(x)\phi(0)\, | 0 \rangle,$$
where the sign depends on the spin, as before. The CPT invariance, or Euclidean rotational invariance, of the correlation function guarantees that this is equal to G(x). So
$$\phi(0)\phi(x) = \phi(x)\phi(0)$$
for integer spin fields and
$$\phi(0)\phi(x) = -\phi(x)\phi(0)$$
for half-integer spin fields.
Since the operators are spacelike separated, a different order can only create states that differ by a phase. The argument fixes the phase to be −1 or 1 according to the spin. Since it is possible to rotate the space-like separated polarizations independently by local perturbations, the phase should not depend on the polarization in appropriately chosen field coordinates.
This argument is due to Julian Schwinger.[9]
The spin-statistics theorem implies that half-integer spin particles are subject to the Pauli exclusion principle, while integer-spin particles are not. Only one fermion can occupy a given quantum state at any time, while the number of bosons that can occupy a quantum state is not restricted. The basic building blocks of matter such as protons, neutrons, and electrons are fermions. Particles such as the photon, which mediate forces between matter particles, are bosons.
There are a couple of interesting phenomena arising from the two types of statistics. The Bose–Einstein distribution which describes bosons leads to Bose–Einstein condensation. Below a certain temperature, most of the particles in a bosonic system will occupy the ground state (the state of lowest energy). Unusual properties such as superfluidity can result. The Fermi–Dirac distribution describing fermions also leads to interesting properties. Since only one fermion can occupy a given quantum state, the lowest single-particle energy level for spin-1/2 fermions contains at most two particles, with the spins of the particles oppositely aligned. Thus, even at absolute zero, the system still has a significant amount of energy. As a result, a fermionic system exerts an outward pressure. Even at non-zero temperatures, such a pressure can exist. This degeneracy pressure is responsible for keeping certain massive stars from collapsing due to gravity. See white dwarf, neutron star, and black hole.
Ghost fields do not obey the spin-statistics relation. See Klein transformation on how to patch up a loophole in the theorem.
Relation to representation theory of the Lorentz group
The Lorentz group has no non-trivial unitary representations of finite dimension. Thus it seems impossible to construct a Hilbert space in which all states have finite, non-zero spin and positive, Lorentz-invariant norm. This problem is overcome in different ways depending on particle spin-statistics.
For a state of integer spin the negative norm states (known as "unphysical polarization") are set to zero, which makes the use of gauge symmetry necessary.
For a state of half-integer spin the argument can be circumvented by having fermionic statistics.[10]
• Markus Fierz: Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin. Helv. Phys. Acta 12, 3–17 (1939)
• Wolfgang Pauli: The connection between spin and statistics. Phys. Rev. 58, 716–722 (1940)
• Ray F. Streater and Arthur S. Wightman: PCT, Spin & Statistics, and All That. 5th edition: Princeton University Press, Princeton (2000)
• Ian Duck and Ennackel Chandy George Sudarshan: Pauli and the Spin-Statistics Theorem. World Scientific, Singapore (1997)
• Arthur S Wightman: Pauli and the Spin-Statistics Theorem (book review). Am. J. Phys. 67 (8), 742–746 (1999)
• Arthur Jabs: Connecting spin and statistics in quantum mechanics. http://arXiv.org/abs/0810.2399 (2014) (Found. Phys. 40, 776–792, 793–794 (2010))
1. ^ Dirac, Paul Adrien Maurice (1981-01-01). The Principles of Quantum Mechanics. Clarendon Press. p. 149. ISBN 9780198520115.
2. ^ Pauli, Wolfgang (1980-01-01). General principles of quantum mechanics. Springer-Verlag. ISBN 9783540098423.
3. ^ Markus Fierz (1939). "Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin". Helvetica Physica Acta. 12 (1): 3–37. doi:10.5169/seals-110930.
4. ^ Wolfgang Pauli (15 October 1940). "The Connection Between Spin and Statistics" (PDF). Physical Review. 58 (8): 716–722. Bibcode:1940PhRv...58..716P. doi:10.1103/PhysRev.58.716.
5. ^ Richard Feynman (1961). Quantum Electrodynamics. Basic Books. ISBN 978-0-201-36075-2.
6. ^ Wolfgang Pauli (1950). "On the Connection Between Spin and Statistics". Progress of Theoretical Physics. 5 (4): 526–543. doi:10.1143/ptp/5.4.526.
7. ^ Jabs, Arthur (5 April 2002). "Connecting Spin and Statistics in Quantum Mechanics". Foundations of Physics. 40 (7): 776–792. arXiv:0810.2399. Bibcode:2010FoPh...40..776J. doi:10.1007/s10701-009-9351-4. Retrieved May 29, 2011.
8. ^ Horowitz, Joshua (14 April 2009). "From Path Integrals to Fractional Quantum Statistics" (PDF).
9. ^ Julian Schwinger (June 15, 1951). "The Quantum Theory of Fields I". Physical Review. 82 (6): 914–917. Bibcode:1951PhRv...82..914S. doi:10.1103/PhysRev.82.914. . The only difference between the argument in this paper and the argument presented here is that the operator "R" in Schwinger's paper is a pure time reversal, instead of a CPT operation, but this is the same for CP invariant free field theories which were all that Schwinger considered.
10. ^ Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Addison-Wesley. ISBN 0-201-50397-2.
|
4d667d19eeac2b3c | The Future of Models
Guest essay by Nancy Green
At the close of the 19th century physics was settled science. The major questions had been answered and what remained was considered window dressing. Our place in the universe was known:
We came from the past and were heading to the future. On the basis of Physical Laws, by knowing the Past one could accurately predict the Future.
This was the Clockwork Universe of the Victorian Era. We knew where we came from and where we were going. However, as often happens in science, this turned out to be an illusion.
A century before, the double-slit experiment had overturned the corpuscular theory of light. Light was instead shown to be a wave, which explained the observed interference patterns. However, Einstein’s 1905 paper on the photoelectric effect turned the wave theory of light on its head.
We now accept that light is composed of particles (photons) that exhibit wave-like behavior. Each photon is a discrete packet of energy (a quantum), its energy determined by the frequency of the wave. What Einstein did not envision were the implications of this discovery, which later provoked his famous protest, "God does not play dice".
But as it turns out, with our present level of understanding, God does play dice. Consider the dual slit light experiment. What does it tell us about the nature of our universe when we view light as particles?
In the dual slit experiment, light from point A is shone towards point B. What we find is that the individual photons will go through slit 1 or slit 2 to reach point B, but there is no way to determine at Point A which slit (path) the photons will choose. And equally perplexing, there is no way to determine at Point B which path the individual photons will arrive from. Relabeling the slits as paths we have:
This property is not confined to light; it can also be recreated with other particles. The implications are profound. Point A has more than one possible future, and Point B has more than one possible past. Rearranging our double slit experiment so that A and B coincide with the Present, we end up with:
Which we can simplify:
Our Victorian Era picture of one future and one past is no longer correct. Our deterministic view of the world now becomes probabilistic. Some futures and some pasts are more likely than others, but all are possible. Our common sense notion (theory) of one past and one future does not match reality, and when theory does not match reality, it is reality that is correct.
Now you may say, well that may be true for very small particles, but surely it doesn’t apply to the real world. Consider however, that in place of a particle, we used you the reader.
Let point A be your office and point B your home. Some days you will travel from the office to home via path 1. On other days however, maybe you need to go shopping first, or meet friends, or your car may break down, or any number of activities may require you to take path 2 to reach home. So you take path 2.
For all intents and purposes your behavior mimics the behavior of a particle. An outside observer will not be able to tell which path you are likely to take. To an outside observer your “free will” is no different than the behavior of the particle. To the observer the reason for both behaviors is “unknown” or “chance”. It cannot be determined, except as a probability.
Chaos is routinely discussed when considering models. What does our double slit experiment tell us about Chaos?
Consider that instead of starting at point A, we start at A1. A1 is a microscopic distance along the path from A to P1. Or, instead we start at point A2, which is a microscopic distance along the path from A to P2.
From geometry, A1 and A2 will be an even smaller distance from each other than they are from A. They are less than a microscopic distance from each other, yet they lead to different futures. At A1 you can only travel to P1. At A2 you can only travel to P2. Thus with a less than microscopic difference in “initial values” we get two different futures, neither of which is wrong.
But wait you say, ignoring that P1 and P2 are in A’s future, they both lead to the same future. They lead to B. But in point of fact, B is only one possible future. We purposely kept the diagram simple. Reality is more complex. From points P1 or P2 the particle may travel to a whole range of futures. (thus the interference pattern of the double slit experiment).
And this is what we see when trying to forecast the weather or the stock market. Very small differences in the values of A1 or A2 quickly lead to different futures. All the futures are possible; some are simply more likely than others. But none are wrong.
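As a toy illustration of that sensitivity, consider the logistic map, the standard minimal chaotic system. This minimal sketch (parameter r = 4 chosen for full chaos, starting values hypothetical) shows a difference of one part in a billion growing to order one:

```python
import numpy as np

def trajectory(x0, steps=60, r=4.0):
    # Logistic map x -> r * x * (1 - x); fully chaotic at r = 4
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = trajectory(0.400000000)   # start at "A1"
b = trajectory(0.400000001)   # start at "A2", one part in 10^9 away
print(np.abs(a - b)[::10])    # the gap grows from ~1e-9 to order one
```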
Climate Science and the IPCC argue that climate is different. Because climate is the average of weather, we should be able to average the results of weather models and arrive at a skillful prediction for future climate. However, does this match reality?
Climate science argues that future climate = (C+D+B)/3, where 3 = number of models.
However, climate is not the average over models. Climate is the average over time. Thus:
If we arrive at B via path 1, then climate = (A+P1+B)/3, where 3 = elapsed time
If we arrive at B via path 2, then climate = (A+P2+B)/3, where 3 = elapsed time
Since P1 <> P2, even though we have arrived at the identical future B, we have two different climates, none of which resemble the IPCC ensemble model mean. And this only considers future B.
Futures C and D are also possible, with different probability. We will arrive at one, but there is no way to determine in advance which one. Thus for a single starting point A, there is an infinite number of future climates that are all possible. Some are simply more likely than others.
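A minimal sketch of this point, with random walks standing in for possible futures (all numbers hypothetical): the ensemble mean is smooth and central, while any single realization, the one nature actually chooses, can wander far from it.

```python
import numpy as np

rng = np.random.default_rng(7)

# 100 equally admissible "futures": random walks sharing a small drift
steps = rng.normal(loc=0.01, scale=1.0, size=(100, 50))
paths = steps.cumsum(axis=1)

ensemble_mean = paths.mean(axis=0)   # the smooth line through the middle
one_reality = paths[0]               # the single path that actually happens

print(ensemble_mean[-1])   # near the drift, 0.01 * 50 = 0.5
print(one_reality[-1])     # routinely far from the ensemble mean
print(paths[:, -1].std())  # spread ~ sqrt(50): why single paths diverge
```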
Thus the failure of climate models to predict the future. The IPCC model mean predicts B, simply because it happens to be in the middle. However, this is simply accidental. As the “Pause” demonstrates, nature is free to choose C, B, or D, and in the real world nature has chosen D. As a result the models are diverging from reality.
In reality the models are attempting an impossible task. There are not simply 3 futures and are not simply 2 paths; there are for all intents and purposes an infinite number of futures, and an infinite number of paths. All are possible.
Some futures are more likely, but that is simply God playing dice. We are not guaranteed to arrive at any specific future; thus there is nothing for the climate models to solve. They are being asked to deliver an impossible result, and like HAL in 2001 they have gone crazy. They are killing people by cutting life support via energy poverty.
160 thoughts on “The Future of Models”
1. Breathtaking! Thank you thankyouThankYouTHANKYOU for putting my thoughts into words.
This is the best statement of the n-Body problem, the simplest and most coherent, from the perspective of Quantum Mechanics, that I have ever read. AND invoking basic Chaos Theory! And I had to pay almost $300 for my copy of Unified Field Theory!!
Beautiful! Just gorgeous!!
2. I disagree with probabilistic reality. Whenever I see someone claim that determinism is lacking, my response is that it is always the case, in the end, that humans' ability to understand is what's lacking, and probabilistic models are advanced to explain out of ignorance rather than knowledge of reality. Probabilities explain nothing and are clung to by folks who facetiously believe that if they can't see the deterministic causes, they must not exist. That's called gigantic egoism. Or just mental laziness. The human mind, as we can all clearly see, is not particularly impressive as a thinking organ.
3. Oh, and saying what many of us have been saying for a long time now: what the IPCC and its cohorts of "scientists" promoting AGW are doing is on a par with TV news commentators interviewing each other about "what it all means" when they don't have any real information to report. So they draw guesses from each other, and that becomes the news… So a group of programmers program everything they can think of into a group of programs meant to simulate the real world's climate.
The claim that by averaging the results, we know anything more about the real world than our guesses and assumptions going into the programs themselves is sheer fallacy. It is nothing but a computerized circular argument.
Again, beautiful! Thank you!
4. @ Col Mosby:
Not to “ring and run” because I’m not long for this thread, but I cannot agree with you about your rather sour assessment of the human mind, simply because there are people who don’t agree with you. The very range of excuses that people come up with to rationalize the lack of warming over the last 17 years is an impressive exercise of thought, put to a silly purpose, but still impressive from a certain perspective. And the theories you disdain are incredible edifices of thought. And the fact that we use probabilities to describe a thing is the other side of the coin of your claim that we simply don’t understand the deterministic causes. You’re correct. Have you ever seen an electron? Held a photon? Can you comprehend the sub-atomic particle we call a quark? Or the characteristics we call spin? Do you think of it as a very very small marble, perhaps?
The human mind is an incredible miracle. Even yours, closed as it appears to be.
I’m honestly sorry I can’t stay to debate you; perhaps another time. I’m sure others will take it up. I mean you no disrespect and say none of this with sarcasm; I simply disagree with your assessment, both regarding the probabilistic nature of physics and regarding the human mind.
Be well…
5. In response to Einstein's "God does not play dice," Bohr's response was "God can do what he likes."
6. That we cannot, yet…or perhaps ever, predict the behavior of everything in the Universe, does not mean it cannot be predicted at all.
7. There’s something awry with the argument here. It’s reasoning by analogy. Complex macroscopic phenomena are not the same as quantum-level particles. I suspect neither mechanics nor probability can easily describe climate dynamics. But I will leave it to more erudite commenters to explain why.
/Mr Lynn
8. This topic reminds me of one of my favorite quotes by William Gibson, in “Pattern Recognition”
(here he is referring to our cultural future, but the problem is much the same)
Hubertus Bigend, Pattern Recognition, pages 58–59.
9. I think the concept of emergent phenomena is a better explanation of why the computer models are not working. If the model does not properly include all the phenomena relevant to the process being investigated — perhaps because we have inadvertently overlooked some of them or do not understand all of the important cause-and-effect relationships between them — then the model cannot possibly be reliable.
10. Col Mosby says:
March 11, 2014 at 8:28 pm
I disagree with probabalistic reality…and probabilistic models are advanced to explain out of ignorance rather than knowledge of reality. Probabilities explain nothing and are clung to by folks who facetiously believe… Or just mental laziness. The human mind , as we can all clearly see, is not particularly impressive as a thinking organ.
Ok, so we now have the gift of your personal philosophy. Thank you.
This is a science blog…without an articulated theory, and some data to demonstrate it represents reality as predicted by your theory, you have bupkiss. Zippo. Nadda. Nothing.
Quantum mechanics still rules (it may not rule forever, but your “philosophy” didn’t even put a dent in it).
11. “At the close of the 19th century physics was settled science.” That’s a myth. There is no evidence that Kelvin said that “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. Physicists knew they couldn’t reconcile Newton with the constant speed of light. When Einstein showed up, he was rapidly accepted. That’s because physicists were scientists. There’s no comparison between them and the climate change nuts.
12. So the basis of chaos theory is explained by physics? – or does it have an axiomatic mathematical foundation?
13. Not even wrong.
Models are averaged not because it makes sense.
Models are averaged not because it is statistically justified.
Models are averaged not because the average physically represents something.
Models are averaged because the average is closer to observations than any single model.
It is a practical hack. Like renormalization. Except it’s nowhere near as good as that hack.
14. The climate models most heavily funded and publicized are matched to a hockey-stick rewritten version of temperature history, plus false, custom-fudged aerosol forcing histories used as a fudge factor. They don’t fit the actual past, so of course they can’t fit the future either.
Any true scientist who refused to pass through the dishonesty filter, by speaking out strongly against such practices, would be an ostracized skeptic; so those models are made by activists, always giving the activist-desired warming prediction and consequently diverging from reality later.
However, to the degree solar activity could be predicted (guessed), the future climate could be predicted approximately on a decadal scale, not by averaging X activist models but by one decent one.
(Some illustrations of revisionism and what can be seen without it are in my usual ).
15. Climate Models are inherently useless. See
The IPCC models are doubly useless being structurally wrong with the built in assumption that CO2 is the main climate driver.
It is well past time for climate scientists to abandon their pipedreams and base their forecasts on a different method.
For forecasts of the timing and extent of the coming cooling based on the 60 and 1000 year periodicities in the temperature data and the neutron count as the best proxy for “solar activity” see several posts at
16. Just because we cannot determine an outcome does not mean that chance is the determining factor. A particular photon will only do one thing, and it will do that because of the net effect of the complex vectors that interact with that photon, the context if you will. We cannot determine the outcome of a photon because if we focus on the photon we lose sight of the context, and if we look at the context, we lose visibility of the photon. At the speeds we are dealing with, any attempt to go from the context to the specific will be too late, and the context will have changed again. It only seems random because we cannot perceive the causality factors in the time frames required to predict the outcome before it has occurred.
Given this limitation we have gone to probability to make some sense of the outcomes we perceive. It is a rational way of dealing with the problem, but that does not mean that anyone is throwing any dice to determine outcomes.
17. Col Mosby says:
March 11, 2014 at 8:28 pm
………..The human mind , as we can all clearly see, is not particularly impressive as a thinking organ.
18. wbrozek says:
March 11, 2014 at 9:00 pm
Stephen Hawking at:
That was back when Hawking believed in black holes. Times change.
19. Therefore there is no such thing as a nuclear power plant, a DVD, gene therapy, etc, etc.
Sorry but this essay is not enlightening in a fundamental way. I do not doubt that the models are inadequate. If they were adequate, then over the years they would have converged more than they have.
That the models are inadequate is not evidence that modeling itself is the wrong way to proceed. Maybe the climate can never be modeled, or maybe the models we have so far have the wrong values for key parameters, or maybe where some variables are now parameterized they must instead be simulated.
For us to abandon modeling, some other approach must be proposed and adopted that will be more productive. Otherwise, the modelers will proceed to refine their models, because there is no other way to proceed.
My criticism of the models is somewhat different. I criticize the modelers and the funding agencies, because the funding agencies have funded modelers who propose to prove the theories that the funding agencies want proven. If modelers were objective scientists exploring the unknown in an unbiased manner, they would not be funded.
20. Frederick, see my post at 9:34 above for a better way to proceed: simple, reasonable, transparent, inexpensive, testable in a fairly short time frame and likely skillful; therefore unlikely to appeal to the climate science establishment. Fewer jobs for the boys.
21. Very interesting. We could argue that photons do not exist, as some are inclined to do.
Donning my coyote headdress, prancing, rattling my gourd, I want to say that chaos is not God playing dice. It is simply what we do not understand.
When some clever person designs another experiment or computational bandwidth increases to a point where the photon trajectories can be parsed between the two slits based on prior interactions, it will no longer be chaos.
22. 1. Climate is not the average of weather; the climate function is an integral of the weather function.
2. Models in physics are good, but are supplanted by models that are better. Newton’s formulations are still valid and useable, but the newer ones based on quantum mechanics are better. Models of the climate have yet to reach the point of being anywhere near good or valid.
23. gymnosperm, it’s been tried. If you use an electron, which also has wavelike properties, instead of a photon, you can use a magnetic field to detect which slit the electron travels through.
Here’s where it gets really weird though. If you cover the detector with a hood, so nobody can tell which slit the electron goes through, you get the zebra pattern caused by quantum interference – the electron behaves like a wave.
If you remove the hood, and watch the detector output, you get a machine gun pattern – the electron behaves like a particle.
Some people have suggested you could turn this into a temporal communicator. If instead of covering the detector with a hood, you sent the image from the detector to Jupiter, then bounced it back to the laboratory, in theory you could make the decision whether or not to view the detector several hours after the experiment – so by watching the output of the experiment, you could receive a message from the future.
Some serious attempts have been made to build such a time communicator – but so far, none successful to my knowledge.
24. Climate will always be an arbitrary set, because it depends on an arbitrary selection of a start and end date. Any particular future day (or any other period of interest) has a probability of roughly 1 divided by the number of days included in the climate-period set of falling outside the range of values incorporated within that climate period.
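A quick Monte Carlo sketch (in Python, assuming independent, identically distributed daily values, which real weather certainly is not) bears this out; under that idealization the exact probability that a new day falls outside the historical range of n days is 2/(n+1), the same order of magnitude as 1/n:

    import random
    random.seed(1)

    def frac_outside(n_days, trials=20_000):
        # Estimate the probability that one new iid value falls outside
        # the min-max range of a "climate period" of n_days values.
        hits = 0
        for _ in range(trials):
            history = [random.random() for _ in range(n_days)]
            new_day = random.random()
            if new_day > max(history) or new_day < min(history):
                hits += 1
        return hits / trials

    print(frac_outside(30))   # ~0.065, i.e. about 2/31
    print(frac_outside(365))  # ~0.0055, i.e. about 2/366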
As long as we all understand the limits of certainty provided by such analysis and methodology, everything is hunky-dory. But humans often don’t work like that, and too many scientists, especially ‘climate scientists’, refuse to understand their limitations, heralding their probabilities as certainties.
Of course it helps if scientists actually have accurate data to base their estimation of climate during any arbitrarily selected period. Unfortunately this is STILL not the case, since the required replicated, random samples are STILL not being taken and inappropriate statistics employed. Garbage in, Garbage OUT.
There is always the possibility that Dr. Ellen Weber had it right when she concluded that Global Warming caused smaller brains….
25. Models are just models. The empirical real world is the correct model with all variables in the mix. To say that a model is in any way superior to observation is hubris. Excellent thought provoking post.
26. Col Mosby says (March 11, 2014 at 8:28pm): “I disagree with probabilistic reality. [And so on and so forth]….”
Well, yes, I guess that none of us like the concept of a universe that is fundamentally, irrevocably and absolutely random, but that seems to be how the quantum world really operates. Einstein didn’t like it, most physicists don’t like it, I don’t like it and even God may not like it – but, unfortunately, Reality seems to favour it.
Of course, in the 110 or so years since the birth of Quantum Mechanics there have been many attempts to construct Hidden Variable Theories, which say that even if Man doesn’t know what the rules of the game are, at least God knows. Sadly, each attempt is overthrown by papers (notably Bell’s theorem) which say that, no, even that weak form of determinism does not prevail: even God doesn’t know the rules of the game and, what’s more, can never know. This claim is somewhat unsettling.
Now, Bell’s Theorem isn’t the last word on the matter and, every now and then someone makes a brave attempt to re-introduce determinism into mechanics – in either a weak or strong form. On each occasion, a theoretical paper is published that counters those attempts. The current state of affairs is a little murky, and the jury is still out, but the room for manoeuvre for those proposing determinism in Quantum Mechanics is now – after a century of effort – very, very limited.
The consequence of this is that, operationally at least, we are forced to accept a description of a universe that is not only intrinsically random, but also one that permits a ‘spooky action at a distance’.
27. Eric Worrall says:
March 11, 2014 at 10:20 pm
Interesting comment.
I’m enjoying reading the comments here but the discussion is way over my head. I don’t know exactly how an “image” of a photon or electron is captured to be described as a wave or particle.
If I were to guess that an image is taken, or interpreted, from the side of the direction of travel, and it sometimes seemed like zebra stripes (wave) and at others like a machine gun (particle), my wandering mind questions whether the trajectory could be something like a corkscrew.
Like I said this is way over my head but if I don’t try to follow along the only certainty is that I will never understand.
28. Climate models wouldn’t be so bad if any one of them actually got it right.
But it’s difficult to get the model right if the scientist is not willing to rethink his forcings and feedbacks.
Instead what we get is ensembles and excuses.
The Met Office was already into spin control over poor weather predictions right before Climategate. Matthew Collins had written a paper, published much later, called
“Climate model errors, feedbacks and forcings: a comparison of perturbed physics and multi-model ensembles”. It was submitted in September of ’09.
The very first line explains what I mean:
“Ensembles of climate model simulations are required for input into probabilistic assessments of the risk of future climate change in which uncertainties are quantified.”
Which in my opinion is a crock of dung.
Now let’s fast forward to today. Two weeks ago, Gavin Schmidt wrote a piece that got published in Nature Geoscience called, now get this, “Reconciling warming trends.”
I love this word. Reconciling. So Gavin wants to find a way to make two very apparent truths seem similar, and his only recourse was to suggest it was “Conspiring factors of errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution…”
His camp of alarmist friends must be cringing at these words. How often has it been stated that the factors he has mentioned play an insignificant role in climate models? Yet, here we are.
So I posted a comment on RC, in which I stated, “Gavin, is there anyone in your department considering adjusting models to contain feedbacks (pos or neg) that were previously omitted? If the models fail to follow observed trends, it’s not the science that has failed, just the models. Could be time to rethink the feedbacks.”
To which he responded,”Feedbacks are emergent properties from the models and so are diagnostic, not input. We are rethinking processes all the time in order to better represent the real world, but as yet, they have not much changed the main feedbacks. In any case, it would not be possible to ‘fix’ feedbacks to change responses just for one decade without changing responses in all other metrics”
In other words, he refuses to change the negative feedbacks because the last decade of flat-lined temperatures is an outlier. Yet, he is willing to go on record and say that the recent pause may very well be the feedbacks he refuses to adjust in his model. How typical.
I am a little bit confused, though, and was wondering if anyone would attempt to help me with something he wrote. Gavin wrote that feedbacks are diagnostic, not inputs. Can someone tell me just exactly what kind of nonsense he is trying to shovel me?
29. Quite a lot of commentary bulldust is being kicked into the air here, which clouds the issues quite significantly, as lots of bulldust hanging around is inclined to do.
We haven’t, we can’t and we are never likely to be able to accurately predict the future whether from chicken entrails or mega million dollar computers manned by hordes of munificently salaried, many lettered climate model fiddlers.
Climate model fiddlers who claim they can or will be able to predict the future of the climate with their models, short of a chance encounter with future reality, are either hopelessly besotted with their own self-perceived omnipotence and incapable of any serious self-examination of their real motives, or are liars [a small number of whom have demonstrated a strong tendency to being pathological liars; ref: Climate Audit’s most recent posts], with a good probability of entertaining some grievous intentions against the public purse and the citizens who fund their activities.
30. Now I’ve got this picture in my mind of a photon (light) traveling in a corkscrew manner, a shorter wavelength being like a finer thread than a longer wavelength (coarser thread). If the amplitude (height of the wave, or diameter of the corkscrew) were the same and it impacted water (say an ocean or lake), it would seem that the longer wavelength would travel deeper, as the photon would encounter fewer water molecules per depth traveled.
Another thought considers the possible properties of that photon when bouncing off or being reflected away from something. Would the frequency or wavelength change?
One more thought. If this photon traveled in a corkscrew manner, would it be more or less susceptible to the influence of forces such as gravity, by reason of a varying distance (due to the corkscrew trajectory) from the attracting force, even though quite small? That would probably need to assume that a photon has some form of property similar to mass.
I’ll probably lose sleep worrying about these things I don’t understand.
31. I commend the article as an explanatory analogy of QM to climate models, but I cannot recommend it as any kind of argument that should alter our belief about said models. I say this even though I agree with the article’s conclusions about the models. The reason basically is that the QM explanation is far too simplistic and in some parts flat out wrong about QM. Unfortunately the proof is too long to fit in the margin of this weblog. ;-)
But briefly, you simply cannot talk about merely not knowing where a particle is or was. Within the bounds of the uncertainty principle, it is much closer to the truth that the particle simply doesn’t have a precise location. And even that overlooks a book’s worth of subtleties. QM is far more remarkable. If I decide now to do a test of an object’s position some time ago, I will find a position and the object will have behaved – in the past! – as a particle. But if I, now, decide to check up how it behaved as a wave previously, lo! I will discover wave behaviour and the object will not have had particle-like qualities in the past. This violation of apparent causation is too much for many people, and some physicists avoid brain strain by saying that QM is merely a calculating method for finding out how things behave. Lots more to say, but it would digress too much from the article topic.
32. eyesonu says:
March 12, 2014 at 12:33 am
Hmmm, photons are funny quantum mechanical things. Now, I have no idea what a photon looks like – in fact, no one does – but it is probably a mistake to think that a photon has a trajectory and that it may somehow corkscrew. These are classical concepts that do not carry over well to the quantum world.
In fact, things are so strange that in an orthodox quantum description of a photon, it would be quite reasonable for me to say that a photon is here; it is also over there; and that it is, in fact, everywhere all at the same time (and, in fact, at all times). It is also true to say that a photon is not a particle; nor is it a wave; and yet it is both. This acute ambiguity about the ‘when’ and ‘where’ and nature of a photon lies at the heart of the quantum mechanical paradoxes of the double slit experiment.
The nature of the dynamical ambiguity of a photon is really only addressed within the arcane mathematical languages of Quantum Mechanics and Quantum Field Theory. And, for the most part, physicists only really understand the concept of a photon within the context of manipulating mathematical symbols in those languages.
In short, the reality of the quantum world is so bizarre that we simply have no good way of visualising a photon.
33. Chaos theory and QM are different phenomena. I think you’re comparing apples and oranges. You provide one interpretation of QM; it’s not the only one. I personally find Ian Stewart and Jack Cohen’s analysis to be cogent, and more consistent with the first law of thermodynamics. I was intellectually excited the first time I saw the ‘many pasts’ version of QM proposed, but I am not satisfied it is a valid description of reality. I freely admit I don’t have the education and skills to debate QM, but I am not persuaded that your argument rests on them either.
34. The history of Anglo-Saxon societies (especially the UK and the US) is the continual creation of alarms, scares, irrational exuberance and over-the-top depression. It’s how market volatility is created, which is the Father in the Financial Services Holy Trinity of Volatility, Information Asymmetry and Societal Ostracism of those who expose Illusion.
The creation of new arenas of volatility is called ‘innovation’. It’s very innovative to steal all the hard-earned savings of humble folks on main street through trashing the thrifts in the 1980s. It’s very innovative to create cartels of sports franchises which give the illusion of competition but are a focus for online global gambling. It’s very innovative to egg up the ICT industry around the millennium and then let it crash down again. Ditto with clean tech, biotech and graphene tech.
It’s very innovative to say that the Russians are megalomaniacal global domination freaks whilst retaining the absolute dominant position in military hardware spending.
It’s very innovative to make paying more for less the desired model for society, with communally owned lower-cost solutions being ostracised as the root of all evil.
It’s very innovative to call a duopolistic Broadway show called American politics where a small number of families are funded by a small number of big money families to play at democracy and then tell the whole world that they have nothing to learn from truly democratic systems about democracy.
Climate science is just one manifestation of this mafia Holy Trinity. Create a scare, control the media, ostracise the honest witnesses, create speculative investment bubbles, call the top of the market, change the shades of grey in the narrative, start another bubble in the energy sphere etc etc.
That’s all this thing is.
Core American Dream politics.
Main Street might not call it the American Dream. It’s not a dream I would go to America to pursue.
But that’s what the true American Dream is all about. It used to be a niche, now it’s the market leader.
Get with the program, folks.
You didn’t vote for Ralph Nader or Ron Paul.
So this is what you get.
35. Space-time trumps quantum mechanics. If quantum mechanics multiplies the number of possible pasts and futures, space-time falsifies the idea of time’s arrow. It falsifies what we commonly understand the past and future to mean. Time’s arrow makes no more sense than height’s arrow, width’s arrow or depth’s arrow – all are bound.
The question becomes why do we experience the world, through our consciousness, as though time’s arrow is real; and not the way it is as a 4d space-time continuum?
36. My take on models is, first, that they are approximations of reality, not reality itself. Some models in science are better than others at describing observations, and as a result allow us to predict future behavior with varying degrees of confidence. Now, to construct a good model there first must be good quality data to use. From what I have read, the quality of the climate data is erratic at best and of very limited scope. This should be a flag that the climate models are likely to be inaccurate and that new data are likely to diverge from the models at some point. I trust quality data before any model in all areas of science, not just climatology. The problem, to me, is that money is available for the theoretical models but not for developing quality data. The cart is before the horse.
37. It’s well known that any retail trader who tries to model the financial markets on a past-predicts-future basis and place bets on them will go bust. For the retail trader the market is a ‘black box’. The best they can come up with is probabilities, which may or may not give you an edge that creates money. Those who make money in the markets are not using probabilities. They are using inside information or manipulation. They are ahead of the curve, not behind it. They are making facts, not creating averages out of them.
So if one does not know what the facts or processes are [a black box], one can only use probabilities to dice outcomes. So when I hear the use of probabilities, then for me it’s because the real facts are not known. It’s admitting it’s a black box. If climate processes are not known then it’s OK to use probabilities [what else?], but doesn’t that highlight a gap in knowledge of the processes?
The CO2ers are dice players betting that CO2 is the dominant factor and insisting everyone else bet on it as well. The climate-prediction dice players are admitting that, for them, climate is a black box. No certainties with black boxes.
38. A point source is a mathematical concept. You can’t achieve it on earth with our current technology. Any source of photons is vast, even if microscopic to our eyes. I defy anyone to produce a single photon from a single position repeatably. Molecules are not spherical, and frankly I don’t know which part of the molecule emits, or in what direction. A slit is enormous compared to a photon’s ‘size’. The reason outcomes are random is that we haven’t measured things adequately.
39. Alex says:
March 12, 2014 at 2:25 am
I must be bored today: I can’t resist.
Yes, a point source is a mathematical concept. But, then, it’s not really correct to say that a photon is a point particle.
I’m not sure that there is any real meaning to assigning a ‘shape’ to an atom or molecule.
In the double slit experiment, a photon is typically prepared in a state with well-defined momentum and therefore, by Heisenberg’s uncertainty principle, a very poorly defined position. Because of this, the slit is not enormous in comparison with the ‘size’ of the photon. In fact, if anything, it is the other way around: the slit is probably ‘small’ in comparison with position probability envelope for a photon that has been prepared in a well-defined momentum state.
And, no, outcomes of the double slit experiment are not random because things haven’t been measured adequately. They are random because, at the quantum level at least, Nature seems to be fundamentally random.
40. Momentum is not position. Without looking it up somewhere, a photon is nowhere near the size of a slit. I am designing a spectrometer at the moment, and 0.1 micron is not close to a photon’s size. Molecules are all shapes and sizes and their spatial orientation is random. The position of a single photon, even with a cyclotron, would fall into a probability curve, i.e. not exact. You can’t get a photon coming from exactly the same point. Sub-nano reproducibility is BS.
Fall back on the attitude that it is random because nature said so. So convenient. You don’t have to look closer then. You don’t have to build the Hadron collider.
41. The climate modellers are also being asked to do a political task:
“In climate change, however, the political class has deferred the choices to the scientists to make – and this means taking the choice away from us. Politicians want a strong, simple story of “certainty”. As one British civil servant wrote to a leading climate scientist in 2009:
In other words, scientists were being asked to perform a propaganda function, while the politicians retained the luxury of passing the buck. Some scientists eagerly stepped up to the propaganda role – yet the task made other scientists queasy. One climate boffin, Peter Thorne, privately fretted the same year:
Yet another climate scientist admitted the “evidence” the politicians were demanding simply wasn’t up to snuff, writing: “It is inconceivable that policymakers will be willing to make billion- and trillion-dollar decisions for adaptation to the projected regional climate change based on models that do not even describe and simulate the processes that are the building blocks of climate variability.”
Basically, CO2 is another Iraq dossier, where possibilities are sexed up as ‘facts, consensus and settled science’.
42. The climate models are like the double-slit experiment. When you look for the heating in the atmosphere, it’s hiding in the deep ocean. When you look in the oceans, it’s hiding in the atmosphere.
43. “HAL if all this science is settled what is our future?”
“I cannot foresee the future, but I do know that at the present rate of tax, all of your energy prices are necessarily going to skyrocket.”
44. Alex says:
March 12, 2014 at 3:14 am
Alex, you sound upset. Sorry to hear that. Good to hear, though, that you are building the Hadron collider.
Not sure what wavelength of light you had in mind but I note that the wavelength of visible light (mid spectrum) ~ 550 nm = 5.5 x 10^-7 m; and 0.1 micron = 1.0 x 10^-7 m. It would seem to me that you would have to be up in the extreme UV end of the spectrum before the photon wavelength becomes much smaller than a notional slit width of 0.1 micron.
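Putting the same numbers side by side (just the arithmetic from above, in a Python one-liner):

    wavelength_visible = 550e-9   # m, mid-spectrum visible light
    slit_width = 0.1e-6           # m, i.e. 0.1 micron
    print(wavelength_visible / slit_width)  # 5.5: the wavelength is ~5x the slit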
45. I’m not upset in the slightest. I’m not as sensitive about these things as some. A spectrometer is not the Hadron collider. You are just stirring sh*t. You seem to be implying that a 0.1 micron slit won’t allow some photons to pass. I guess I have to do more research.
46. The essay certainly shows how systems can be deterministic but non-computable, which covers the chaotic nature of climate. But the debate about whether there are multiple futures or just one is far deeper than this essay concedes. The interpretation of quantum theory leads to two significantly different outcomes. The older Copenhagen interpretation indicates there is only one outcome, which is not predictable but probabilistic; at the other extreme is the Many Worlds interpretation, which predicts that all outcomes occur but that at each point the Universe splits. Many physicists and philosophers are beginning to give credence to the latter interpretation.
The essay is certainly right in the statement that ” climate is not the average over models. Climate is the average over time.”
47. Alex says:
March 12, 2014 at 3:54 am
No, not saying that a 0.1 micron slit will not allow some photons to pass. (Some of) the photons of wavelength 550nm will pass quite happily through the slit.
48. Fascinating. Read cosmologist Lee Smolin on time as he hypothesizes a state-free universe as process. N. N. Taleb warns against bald induction that NEVER sees The Black Swan hiding in Mandelbrot’s fractally complex universe (of which time is only a hypothetical dimension).
49. Ahh, yes, another one, and this one I can provide an online link for:
A. Albrecht and D. Phillips, Origin of probabilities and their application to the multiverse.
Along with João Magueijo, Albrecht independently proposed a model of varying speed of light cosmology to explain the horizon problem of cosmology and propose an alternative to cosmic inflation.
50. I’ve often said that climate science needs to move down to the quantum level and rethink everything, because this is the level at which energy and photons and molecules operate.
CO2 intercepts an LW photon emitted from the surface… then what happens? Can one actually model what happens to untold numbers of these interactions every millisecond, going on for centuries? NOT.
51. Albrecht’s argument is that probabilities are a subset of QM in the same way that Newtonian physics is a subset of relativity.
52. I found this article somewhat convoluted. Using quantum physics to make a point about the macro world isn’t really valid, even if analogies can be drawn. Better to just say that chaos by definition cannot be predicted, climate is based on a non-linear chaotic system, and the models and “projections” are all, therefore, bullshit.
53. Chaos is a concept. We call it chaos because we can’t measure it or understand it clearly.
The chaos of yesterday is not the chaos of today and the future. The mysteries of yesteryear are not so today, nor will they be in the future. Humanity needs analogies to understand things. Unfortunately we are running out of analogies to explain new things now. Old analogies don’t work and leave us confused, so we resort to concepts like nature and god to explain things. Hundreds of years ago philosophy and science were interwoven. Then came a time when they were separated. Now it’s back to the two being together for our next leap forward.
54. I think a closer analogy is that the weather is a nonlinear dynamic system, i.e., a chaotic system, and the climate is its attractor. Predicting the trajectory of the future weather becomes an intractable problem within a few days, but projecting the shape and position of the attractor in response to a perturbation (change in forcing) may well be possible. After all, the climate is the predictable part of the weather system. As a matter of fact, I venture to claim that a 1960s geography text map of the climate types and zones is still a pretty good prediction of the climate in the year 2100; the climate zones may be shifted one or two hundred miles this way or that, but the overall pattern will be the same. The challenge for our models is to improve upon that projection, and given that they still have documented correlated errors several times the magnitude of the phenomenon of interest, models that might be able to project the climate are probably at least two or three model-development generations away.
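The toy Lorenz-63 system illustrates exactly this trajectory-versus-attractor split. A minimal sketch in Python (crude Euler stepping and the textbook parameter values; an illustration only, not a climate model):

    def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
        # One crude Euler step of the Lorenz-63 equations.
        x, y, z = state
        return (x + dt * s * (y - x),
                y + dt * (x * (r - z) - y),
                z + dt * (x * y - b * z))

    def simulate(state, steps):
        zs = []
        for _ in range(steps):
            state = lorenz_step(state)
            zs.append(state[2])
        return state, zs

    end_a, zs_a = simulate((1.0, 1.0, 1.0), 200_000)
    end_b, zs_b = simulate((1.0, 1.0, 1.0 + 1e-9), 200_000)

    print(end_a, end_b)  # the "weather": end states bear no resemblance
    print(sum(zs_a) / len(zs_a), sum(zs_b) / len(zs_b))  # the "climate": long-run means nearly agree

The two end states are unrecognizably different, yet the long-run means of z agree closely: the trajectory is lost within a short horizon while the statistic of the attractor survives.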
55. Right. Obviously a chaotic system with more than one forcing is impossible to forecast more than a few days ahead.
56. Thank you, Anthony, for the opportunity to post this article. And thank you, Readers, for your interesting comments. Many of the questions posed have been well answered by other comments and need no reply from me.
For those struggling, the subject matter is inherently confusing. It flies in the face of common sense, so we struggle. The aim of this article was to ease this struggle by showing the simplicity of the concepts.
On the question of Newton, the propagation speed of gravity remains an open question in science. Newton recognized the problem of action at a distance. If the Sun were to instantly disappear, would the Earth remain in orbit about the non-existent Sun for another 8 minutes or so?
On the question of deterministic chaos, by the time the particle reaches A1 or A2, its path to P1 or P2 is no longer probabilistic. Probability was resolved earlier at point A. What is mathematically fascinating about chaos is that even though the paths are deterministic, infinitesimally small differences between A1 and A2 lead to divergent futures.
On the question of Copenhagen vs many Worlds, this is also an open question. Is there a deeper reality, in which probability will give way to determinism? In an infinite Universe all things are possible. Intuition tells me that determinism is incompatible with free will.
57. First, it turns out Einstein was not entirely correct about the photoelectric effect: if you don’t believe this, make an interference pattern on a solar cell, and it still produces a voltage. Second, just because one does not know what every particle in a system is doing doesn’t mean we know nothing or cannot predict anything about the system as a whole. That is why we can input energy into a system and predict the temperature without knowing what every molecule is doing. For more complex systems this takes more understanding. With respect to climate models, the problem is not that we won’t ever be able to do this; the problem is that at the moment we can’t, but we are acting as though we can, and then using this pseudo-knowledge as a bludgeon to advance a political agenda.
58. Alex says: March 12, 2014 at 6:42 am “Chaos is a concept. We call it chaos because we can’t measure it or understand it clearly.”
Chaos is a label and a label is not the thing. Chaotic complexity, as “fractally complex”, does not have a meaningful measure and its understanding would be the understanding of all.
59. “God does not play dice” and the Clockwork Universe are still unsettled questions. The reason for this is a lack of understanding of the meaning of probability theory. Most people, including scientists, believe that a stated probability is a measure of the state of nature. This is not quite right.
To explain consider this thought experiment: You flip a coin in a classroom placing it on a table with your hand over it. You ask the class what is the probability the coin is heads up. Everyone answers 50%. You agree.
Now, you alone peek at the coin and ask the class again what the probability is. They again answer 50%, but your answer is not in agreement with the class. Why?
Because: probabilities are not just a measure of the state of nature, but a measure of OUR UNDERSTANDING of the state of nature.
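A trivial sketch of the thought experiment, if it helps (illustrative only):

    import random
    random.seed(42)

    coin = random.choice(["heads", "tails"])     # nature decided at the flip
    p_class = 0.5                                # the class has not looked
    p_teacher = 1.0 if coin == "heads" else 0.0  # the teacher peeked

    # Same coin, same instant, two different "probabilities": each number
    # measures an observer's information, not the state of the coin itself.
    print(coin, p_class, p_teacher)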
Hence, Quantum Mechanics does not overturn the Clockwork Universe concept. Just because we humans cannot predict the outcome of an experiment does not mean that it is not predetermined.
Note: I am not arguing that we live in predetermined universe. I am just arguing that Quantum Mechanics does not settle the question.
60. The point of my ‘waxing lyrical and philosophical’ was to open the mind to other concepts and to break the barriers of established beliefs. I don’t automatically accept the words of people in cassocks, turbans or lab coats. The initial experiment/discussion of particular phenomena at the beginning of this post is what I have issue with. I have no problem with the conclusion.
To my mind there is chaos and order ( I love Michael Moorcock)
Chaos is the unknown and order is the known.
61. Alex says: March 12, 2014 at 7:32 am “My point exactly. Complexity is not insolvable if one has the tools.”
And my point is precisely to the contrary. There are no tools, even in principle, to solve the adequately and realistically complex. Imagine a Planck’s-scale-like granularity to everything, and an infinite dimensionality.
I commented on the folly of connecting the dots of any epistemological mapping and was shortly convinced that there is no natural scale to such a mapping and an infinite number of points between any two dots.
62. The only people that I know who have solved the riddles of quantum mechanics are master surfers. They cling to their boards atop mountains of water and survey hundreds or thousands of points as they search for signs of the wave. Almost magically, some points become a line and they are catching the wave. I asked a master surfer about the role of quantum mechanics in his professional life. He replied, “Quantum physicists don’t surf.”
63. In her essay, Nancy Green provides us with a graphical illustration of an entity that does not exist for modern climatology. This entity is an event.
This becomes clear when the circles labeled B, C, and D are identified as the outcomes of an event and the circles labeled P1 and P2 are identified as the preceding conditions. The conditions are observable in the present. The outcomes are observable in the future. The outcomes and conditions are examples of states of nature.
The mapping from the conditions to the outcomes, indicated in Green’s graphic by arrows, is an example of a predictive inference. A “predictive inference” is a conditional prediction. Conversely, a “prediction” is an unconditional predictive inference. Contrary to popular opinion, the “projections” of the IPCC climate models are not examples of predictions.
No climate model of AR4 or AR5 makes a predictive inference or predictions. A predictive inference and predictions are, however, essential ingredients for the methodology of climatological research to be made scientific and Earth’s climate to be made controllable.
64. Patricia says,
“Complex idea very clearly explained for the non-physicist.”
As a former physics student (30 years ago) I wish I could say the same, but the article makes no sense to me. A deterministic, classical mechanical system might in theory be predictable, but it can still be chaotic and, for all practical purposes, unpredictable. On the other hand, an indeterministic, quantum mechanical system might be intrinsically unpredictable, but this doesn’t require it to be completely unpredictable; otherwise (for example) we would not have been able to make the technological progress we have in the last few hundred years using deterministic classical physics. So it’s pointless to argue that the climate models are useless because the world is indeterministic, even though the world is indeterministic and the climate models are useless.
65. Interesting post, but the author is wrong when he says photons are either at point A1 or point A2. In quantum theory, unless observed, the particle is both at point A1 and at point A2. That’s how you get the interference pattern even when only a single photon goes through the slits at a time.
Also, chaos theory does not derive from the uncertainty of quantum physics. Both create uncertainty, but chaos theory is derived entirely from mathematical equations. If mathematical equations exhibit certain behaviors, they are chaotic. If they also represent a physical system, the system is chaotic.
66. Martin Lewitt says: March 12, 2014 at 6:43 am
“………but projecting the shape and position of the attractor in response to a perturbation (change in forcing) may well be possible.”
So is the jetstream an attractor? It certainly defines the weather over Europe depending on its position. But is it predictable or is it part of a hierarchy of chaotic attractors?
I suspect the latter, i.e. although it is a major influence on weather in Europe, how it behaves may well show no pattern; i.e. its own time evolution is chaotic. Hence European climate and climate change would be unpredictable no matter how big the computer or how sophisticated the model.
67. Lovely essay, Nancy Green, very direct and easy to read and comprehend. My argument with CAGW models and prognostications is that there is no inclusion of human factors such as geo-engineering activities, and no factoring in of radio / directed-energy frequency emissions such as HAARP, radio telemetry, radar arrays and cell tower output.
When I asked Dr. Hansen at one of his presentations whether his models included artificial cloud forcing technologies and scalar-wave directed-energy broadcasts, his response was interesting. He said “No”: climate models do not take any radio or directed-energy broadcasts or geo-engineering cloud forcing tech (i.e. chemtrails) into account. He did say that it probably does have some impact and maybe should be included in the modeling calculus.
Lastly, God playing dice, or God can do whatever God wants: I for one hold the opinion that the atmosphere is a living system, as is the ocean, the biosphere we live in, as well as space, life being the rule not the exception in the universe. Part of the problem with modeling living systems such as the atmosphere and oceans is that we cannot take into account the living synthesis and intelligence of complex living systems. Thus, unless the unpredictable behaviors of a living atmosphere can be completely and utterly controlled, all the modeling in the universe will always be subject to great uncertainty and unpredictability. In this writer’s opinion.
68. Well, I’ve got to admit, that I actually learned something from the comments section today
– that Stephen Hawking has something new to say about Black Holes
– not that they don’t exist, just that the Event Horizon, is, apparently, just the Apparent Event Horizon
– interesting stuff, if only I could figure out what he means…
Anyway, as for the main article
– it’s nonsense!
– confusing quantum physics with chaos theory with simple Newtonian physics
– I have to say, ‘What is the point?’
Why not apply some basic filtering to the articles published on WUWT??
– don’t just publish any old nonsense!
Why not apply some sort of peer review process, where someone with at least an inkling about physics or science looks over articles before they are given the WUWT seal of approval??
This kind of article is just such a waste of everybody’s time, and does nothing to further the AGW debate…
69. “All the futures are possible; some are simply more likely than others. But none are wrong.”
Cute, but a theoretician’s mistake. If my model was that casting chicken bones and entrails would allow me to predict the future, it may end up correctly predicting the future but it is not related to the probabilities you are talking about. Moreover, if it ends up not predicting the future, then I can take comfort that it was a possible future. I’m sure you intend that scientific constraints reduce the possible futures.
70. Nancy Green – as p@ Dolan says, a beautiful exposition of a crucial problem in physics, one that even a layman like myself can understand – and one that, as I see it, has applications in economics as well as physics. Bravo!!
71. One hopes that Col Mosby, and those in agreement with him, would agree with a few basics about science:
1. Scientific models are understood to incompletely describe the real world, but are useful to the extent they allow for accurate prediction of events in the real world.
2. Quantum Mechanics as a model has been extremely accurate over a broad range of experiments in predicting outcomes in the real world.
3. At the quantum level, probability functions have worked better, by far, than strict mechanical determinism at predicting the outcome of experiments.
4. Claiming that, underneath it all, there is actually a fully determined real world with no dice-throwing is not supported by experiment. Such a claim is a statement of faith, a “religious-type” claim, not a scientific claim: maybe true…maybe not…no way to know. To argue that those who disagree with such a claim are not intelligent is ridiculous and self-discrediting.
72. I call this the Unique Solution Syndrome. A) All problems have one unique, correct or “best” answer. B) By axiom A, once you have found an answer, that answer is “the” answer, and it is inappropriate to continue looking. And C) by definition (axiom A), all other proposed answers must be wrong or inadequate.
This is the Syndrome of the deterministic Engineer mind that needs to have the whole world nailed to the floor. Uncertainty is the enemy of the man who sees his self image in control. If he can’t predict the future, he can’t adjust it to suit his wants, and so is not in control. He is nothing but flotsam in the world.
Of course he isn’t, but this is how he feels. The USS sufferer seeks to wrest certainty by the standard true and manly way: force. The academic destroys the reputation of his detractors, the general throws more troops at the redoubt, the intelligence officers spy on EVERYONE, and politicians regulate what they can’t legislate. All to demonstrate they are men worthy of the term: people who can determine outcome, any outcome.
The Unique Solution Syndrome is what Michael Mann displays. The USS is why the Titanic sank.
73. NotTheAussiePhilM:
You’ve drawn the wrong conclusion about the merits of Ms. Green’s article. Rather than being nonsensical, her article is meritorious.
A logical concept ties Newtonian mechanics together with quantum mechanics and chaos theory; this concept is missing information. Like today’s climate models, Newtonian mechanics assumes that information for a deductive conclusion about the outcomes of events is not missing. Quantum mechanics and chaos theory make no such assumption. Contrary to the assumption of IPCC-affiliated climatologists, the climate system is chaotic and information is missing.
74. Never mind the slit experiment, I find it interesting that with the photons apparently criss-crossing each other from my two computer screens at angles to each other, the words and images are clear and uninterfered with from both screens, and if I look over my shoulder into a mirror, I can see the television in the next room. Each of the photons is following a perfectly forecast track in the real world. I admit that I wouldn’t get the same impression if I was trying to view any part of my field of vision through a slit. Mind you, if I do put slits (pinhole size) in front of my eyes, I can see in perfect focus the immediate foreground and the distant background, even though my eyesight is not that good. I made a pair of cardboard pinhole glasses once when I had forgotten my specs and wound up scaring the hell out of an unexpected visitor whom I turned to greet, my face looking somewhat like that of a chameleon. Now tell me, with the same photons from the foreground and background hitting my eyes, how come I can discern them so much better with pinholes or slits? Also explain the criss-crossing photons that almost intelligently bring me an image of a small letter on each of two different screens. They don’t choose alternative probabilistic paths at all.
The slit is itself an interference when it is small enough. The wave characteristic can’t be dispensed with. Apparently the photon can’t pass through a slit narrower than lambda/6. If I aim a photon at a small slit, it is the same result I’d get if I aimed a handful of porridge at the two slits. Climate science is like my criss-crossing photons and my being able to see better with pinhole glasses. It is not like the narrow slits. I’ve thought for a long time that we rejoiced about this prematurely at the time and then never gave it more thought.
75. In the double slit experiment, each photon goes through a single slit. It is the outcomes of the events that are unpredictable. That they are unpredictable is a consequence of missing information. The outcomes of climatological events are similarly unpredictable as evidenced, for example, by the “pause.”
76. Terry Oldberg says:
March 12, 2014 at 12:27 pm
Like I said before, I wish WUWT applied some sort of pre-filter to articles published here
– for example, running them past Roy Spencer
– so he can chuck out the obvious nonsense, and avoid confusing people such as yourself
To me, this article has as much merit as the insane ramblings of Doug J. Cotton that Anthony so despises.
77. We advance from hypothesis (qualitative argument) to theory (quantitative prediction) by the construction of mathematical models. Experiment or observation is the test against which we determine the adequacy of the theory, or of the hypothesis. So, there’s nothing wrong with modeling. But there is everything wrong with dishonesty.
A point to consider is that when a theory premised on random processes produces results that are identical with experiment and observation, it is tantamount to proof that the process is random—because any other process would necessarily have different thermodynamic characteristics. (A random process conforms to maximum entropy.) The kinetic theory of gases is a good example of such a profoundly confirmed theory.
Finally, in my view, if God wanted to institute processes in the world that can run untended forever, random processes would be the ones to create. Strangely, if you are dealing with a random process (for huge ensembles of entities), you know exactly what it is going to do and can turn your back on it while occupied with other matters. I consider the random process to be “God’s autopilot.” It is a marvel of simplicity.
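A back-of-the-envelope sketch of that autopilot (unit-variance Gaussians standing in for a thermal velocity distribution; an illustration only, not kinetic theory proper):

    import random
    random.seed(0)

    def mean_speed_squared(n_molecules):
        # Every molecule's velocity components are drawn at random,
        # yet the ensemble average is utterly predictable.
        total = 0.0
        for _ in range(n_molecules):
            vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
            total += vx * vx + vy * vy + vz * vz
        return total / n_molecules

    print(mean_speed_squared(10))         # noisy: a handful of molecules
    print(mean_speed_squared(1_000_000))  # ~3.0: the huge ensemble is stable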
78. eyesonu says: @ March 12, 2014 at 12:33 am
Hmmm, photons are funny quantum mechanical things…. In short, the reality of the quantum world is so bizarre that we simply have no good way of visualising a photon.
That was the reason why us poor chemistry students called it Science Fiction Physics….
79. Son of Mulder, those jet streams don’t make the climate of Europe less predictable; what you have experienced is normal climate. You should consider whether similar variability has happened before over the last 60 to 120 years, enough to give a good sampling of European climate. Despite this variability, Europe is reliably warmer than other regions of the earth at similar latitudes, due to the heat transported by the Gulf Stream. It takes most of a lifetime to experience a location’s climate; however, it seems that humans are good at forgetting that the “unusual” weather they are experiencing had been very much the same just a decade or two before.
80. Martin Lewitt
I agree that “those jet streams don’t make the climate of Europe less predictable”, but what they do is indicate basic unpredictability, as they deterministically but noncomputably move around long term. Is the Gulf Stream an attractor? What about continental drift? Or the effects of the moon, the planets…? How do they all stack up against CO2 growth, cloud variability…? All sorts of timescales, lags, interactions, resonances.
Where to begin to achieve good computer based modelling that predicts reality? Beats me.
81. I found this piece very interesting.
I was reminded of evolution. As a lay person, I have the idea that both genetic mutation and genetic drift are so to speak constantly occurring. A species changes in response to stimuli or environment, but it doesn’t change (significantly) unless one or more mutations (which may be random?) survive by becoming more successful than the original.
“Of all the reptiles alive today, crocodiles and alligators may be the least changed from their prehistoric ancestors of the late Cretaceous period, over 65 million years ago.” Many species, on the other hand, including modern humans, are brand new by comparison.
Does a specific environmental factor actually cause a specific mutation or evolution, or does it (at most) change the balance of probabilities, so that the outcome as to which species survives and which does not could not be predicted even if a computer could be programmed with all the relevant data?
82. son of mulder:
Re: Your question of where to begin to achieve good computer based modelling that predicts reality.
A model is a procedure for making inferences. In building a model, the builder selects, from many candidates, the inferences that the model will make. Currently, climatologists make this selection through the use of intuitive rules of thumb that I’ll call “heuristics.” However, in each instance in which a particular heuristic selects a particular inference, a different heuristic selects a different inference. In this way, the method of heuristics violates the law of non-contradiction. Non-contradiction is among the classical laws of thought.
To violate a classical law of thought is an unpromising method for selecting the inferences that will be made by a climatological model. Fortunately, there is an alternative. It can be proven that an inference has a unique measure. The measure of an inference is the missing information in it for a deductive conclusion per event, the so-called “entropy.” In view of the existence and uniqueness of this measure, the question of how to select the inferences is solved, without violating non-contradiction, by a kind of optimization in which the entropy is minimized or maximized under constraints expressing the available information. This approach has been tried over a period of more than half a century and found to work as expected. Products of this approach include modus ponens, modus tollens, thermodynamics, the modern theory of communication and a number of different mid- to long-range weather forecasting models. All of these products have been extensively tested against real-world outcomes without being falsified by the evidence.
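Assuming the “entropy” meant here is the standard Shannon measure of missing information, a minimal sketch of the quantity:

    import math

    def entropy_bits(p):
        # Shannon entropy: the missing information, in bits, per event.
        return -sum(q * math.log2(q) for q in p if q > 0.0)

    print(entropy_bits([1.0]))       # 0.0: a deductive conclusion
    print(entropy_bits([0.5, 0.5]))  # 1.0: a fair coin
    print(entropy_bits([0.25] * 4))  # 2.0: maximal for 4 outcomes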
83. Highly disingenuous.
What quantum physics says is yes, there are possible different outcomes for the same set of experiments.
What it ALSO says is that, averaged out to create a macro world, the chances of any of them being radically different are essentially zero.
Computers, which utilise quantum effects in their semiconductors, do not routinely produce different answers to the same program.
And systems that are so finely balanced that a butterfly’s wing can send them one way or another are inherently states that do not last long. Even chaos has its attractors.
In fact the stability of the Earth’s climate is one of the most significant things that makes catastrophic AGW unlikely to be a real effect: if it were real, it would have happened many times before.
The problem with AGW is not a problem with the modelling process per se; it’s a problem with the amazingly crude and simplistic models that the IPCC relies upon.
84. Models are averaged because the average is closer to observations than any single model.
Clearly that is not the case. The IPCC spaghetti graph shows some models that track much closer to observations than the ensemble mean. Though this could be simply due to accident.
The ensemble mean makes sense when the model error is randomly distributed around the mean. Some high, some low. Over a large number of samples the highs and lows will average out. This is why no individual investor can outperform the market over the long run.
However, in the case of climate models this is not the case. The models share many assumptions and as a result you cannot consider their error to be random. It should not be expected to average out to zero.
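A toy calculation shows the difference (illustrative numbers only, assuming Gaussian errors):

    import random
    random.seed(0)

    truth = 0.0
    n_models = 30

    # Independent errors: the ensemble mean converges toward the truth.
    independent = [truth + random.gauss(0.0, 1.0) for _ in range(n_models)]

    # Shared assumptions act like a common bias; averaging cannot remove
    # what every model has in common.
    bias = 1.5
    correlated = [truth + bias + random.gauss(0.0, 0.3) for _ in range(n_models)]

    print(sum(independent) / n_models)  # near 0.0
    print(sum(correlated) / n_models)   # near 1.5: the shared bias survives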
In any case, say the models were independent. Best case, what is it they are telling you? Are they telling you what the future climate will be? No. At the very best they are telling what the most likely climate will be.
And before you jump to the conclusion that this tells you anything worthwhile, consider the ingredients label on a can of “meat”.
ingredients: meat, meat byproducts, sugar, corn syrup, canola oil, sesame oil, water, vinegar.
Now consider what the percentages of each may be: with eight ingredients and “meat” merely the largest share, the can may contain only 13% meat. Now consider that there is a near-infinite number of future climates. What percentage of the total is the most likely climate? 1/infinity? How likely is that to match reality?
• @ Nancy Green,
I still think it’s one of the most lucid descriptions of why the models cannot work that I have ever heard, and it resonates with many other bits and pieces I’ve yet to articulate to myself well enough to put across coherently to an audience. I’m reminded of Godel’s Incompleteness Theorems, which, oddly, are never mentioned to explain why it’s impossible for a computer, which operates from a program that is essentially a set of mathematical laws, to step outside those laws; a debased corollary might be stated, “A system’s Laws cannot be proven from within that system.” Is it possible to step outside the system? That was a paradox visited by Hofstadter, and over 30 years later, I’m still mulling his words. But Godel’s Theorems resonate because, to me, they are a way of restating the Second Law of Thermodynamics; in this case as it applies to the climate models. Simply put, I find myself leaning more in the direction that the models will never be able to predict with the accuracy currently ascribed to them by the IPCC and its adherents, because that would violate the Laws of Thermodynamics, and most of the other arguments are merely differing perspectives on this problem.
Someone up-thread (regrettably I don’t have the name to hand to give credit) did point out that the models ARE useful when used correctly. Let me state clearly that I believe they are being heroically misused by the IPCC and the Alarmist community, who ascribe to these computer sketches an almost infallibility, and certainly skills and fidelity they are far from achieving. The correct use is that you program in everything you know, or suspect, and then use the outcome as an idiot-meter to bounce against reality. Models can help us understand certain processes. But they cannot predict the future; the Second Law of Thermodynamics implies as much, if not outright states it.
A few someone elses seemed to opine that there is such a thing as determinism, and that those of us who rely on probabilities in describing the physical world are denying something simply because we cannot define it. This is an odd, circular sort of argument; but OK, looking at that as well, I have to ask them: have you ever seen a sub-atomic particle? Can you put calipers on a lepton?
There are things which defy accurate description because they’re intangible in and of themselves and can be described only as large enough groups. Once we get beyond a very small number (two) of anything, predicting their future interactions becomes impossible—this was recognized even while Newton’s Laws of Motion were considered proof of determinism! We have since shown that Newton’s Laws are a subset, a special circumstance, of Einstein’s relativistic theories. I have often thought of the use of statistics as nothing more than a tool. To say we can describe how something behaves, even if that behavior is NOT probabilistic, is not to say we know that thing—it means we know its effects. For example, electricity, or any electromagnetic radiation: we use it all the time, we measure its effects, but we still cannot say for a fact that an electron is a particle or a wave. Depending upon my perspective, and the math I wish to use, and the experiment, I may describe it as either. And since Niels Bohr, both have been considered equally valid.
Yet, as with Feynman’s Sum over Histories and his Quantum Electrodynamics, statistics has proven—and has never yet been disproven—completely accurate in describing what we actually perceive. It’s very, very difficult—foolish, in my view—to argue that statistics, probability, is not a very effective description of what we term quantum reality.
Which is not to say that we know what quantum reality is! It may be, some day, that someone finally DOES come up with THE Grand Unified Field Theory of Gravity. I rather think not, because of the Second Law, but to be unhappy that statistics is the best way to describe what we don’t truly understand is muleheaded, in my humble opinion. Folks who argue against the probabilistic nature of describing our observations—especially when it also describes the predicted outcomes of experiments so very accurately—are arguing about the labels, the description, not the substance. None of us who agree that statistics works are saying that God DOES play with dice! (Well, I’m not. Not for me to say what He does, or to put words into the mouths of others…) Bah. People are so hung up on that one phrase, uttered in frustration! Let it go. We deal with reality as best we may, trying to comprehend what we can’t see, feel, taste or hear at its most basic level, and find impossible to measure discretely in the reality we perceive directly because of the n-body problem.
Which brings me a long way indeed around the block to say again, I believe your essay was simply beautiful, and nothing I’ve read up-thread can contradict any of it.
Again, thank you. It’s like you reached into my head and organized a bunch of thoughts that have been bouncing around randomly for some time now (and if you could do that for a bunch more…!)
85. “…nature is free to choose C, B, or D, and in the real world nature has chosen D. As a result the models are diverging from reality.”
We could consider nature to be a gigantic analog computer, constantly recalculating a new state based on the current state and its set of rules, and thus it will arrive at some state 100 years from today. If we knew all of nature’s rules and had a perfect model simulation, it would take only one significant change, such as an undersea volcano partially blocking a key ocean current, for the real world to diverge from the perfect model and the perfect simulation. This is why models are never going to tell you the correct answer.
86. Nice essay, thanks.
IMHO, models are re-presenting something that already exists. If you drive your model… good chance you are going to crash.
Thanks for the interesting articles and comments.
87. This article conflates Quantum Mechanics and Chaos Theory, and it completely ignores the fact that, despite the limitations of these two concepts, many scientific models of systems simpler than climate successfully predict a great many phenomena to a high degree of accuracy.
It is true that some complex systems (like weather) behave so as to have a high dependence on initial conditions, such that the accuracy of future predictions is limited by the accuracy of the initial model inputs (an essential tenet of chaos theory). It is also true that the uncertainty principle of quantum mechanics may place fundamental limitations on the ability to measure an initial state, ultimately limiting a model’s predictive capability for such complex systems. However, that does not make the modeling endeavor futile. Rather, it simply means that modelers must use skill in deciding what parameters can be predicted, durations over which those predictions may be accurate, and how those results are interpreted.
There are many examples of successful models, from regional weather models to models of planetary and satellite orbits. I, personally, have used computational fluid dynamics models to optimize processes for the semiconductor industry, and I’ve worked closely with those using diffusion, device, and even reliability models to predict short- and long-term performance of semiconductor devices. Many properties of turbulent flow, a chaotic process, are readily modeled, as witnessed by the vast improvements we have seen in modern aircraft, automobile, and boat designs.
Even climate modeling is a worthwhile endeavor, so long as one acknowledges the limitations of our current capabilities. I am baffled by the IPCC and other advocates, who insist on the viability of long-range climate predictions, when the models are known to be lacking in such an important phenomenon as cloud formation and I’m sure many other critical factors (I am not a climate scientist). Nevertheless, I’m also sure that there are some skilled modelers in the climate science community using even these imperfect models to understand the interactions between geography, ocean currents, jet streams, and the sun and their effects on climate. Ultimately, it is this understanding that will also lead to an understanding of what phenomena are lacking and how the models and our understanding of climate can be improved in the future.
• SemiChemE:
Caution: Under official IPCC terminology, those squiggly lines on a plot of the global temperature vs time are not “predictions.” They are “projections.” Predictions are essential to the control of a system. Projections are useless.
88. There have been a number of comments questioning the utility of using a classic quantum-mechanical experiment (the double-slit experiment) as a metaphor for a complex dynamical system, e.g. the Earth’s climate.
Implicitly, Nancy views the time evolution of the Earth’s climate as a stochastic diffusion process. By viewing climate in this way, quantum mechanics immediately becomes an analogue for climate – not because quantum mechanics and diffusion are similar physical processes but, rather, because the underlying mathematical description of the two processes is essentially the same.
On the one hand, diffusion processes are described by, well, a diffusion equation – a second-order differential equation involving ‘spatial’ and ‘time’ derivatives. On the other hand, the time evolution of a quantum mechanical system is described by the Schrödinger equation – essentially a diffusion equation, albeit one with complex coefficients. Just as there is a path integral formulation of quantum mechanics (think double-slit experiment in this context), there is a corresponding path integral formulation of classical Brownian motion. The underlying physics of the two processes is vastly different, but the mathematics used to describe them is essentially the same.
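For concreteness, the parallel can be written out; these are the standard textbook forms, nothing specific to climate:

\[ \frac{\partial u}{\partial t} = D\,\nabla^2 u \quad \text{(diffusion)}, \qquad i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \psi \quad \text{(free-particle Schrödinger)}. \]

Dividing the second equation by \(i\hbar\) gives \(\partial\psi/\partial t = (i\hbar/2m)\,\nabla^2\psi\): formally a diffusion equation with the complex coefficient \(D = i\hbar/2m\), and the substitution \(t \to -i\tau\) (a Wick rotation) carries one equation into the other.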
For me, the real issue raised by Nancy’s essay is the extent to which the time evolution of the climate system can be modelled as a (possibly bounded) stochastic diffusion process. If it can, then Nancy’s observations stand.
89. Point well taken, but I would say it is a philosophical-type claim, not a religious-type one.
Of course, the opposite claim, that “underneath it all, the universe is a dice game”, is also a philosophical-type claim. It is neither provable nor refutable. Arguing that “… there is no way to determine at Point A which slit (path) the photons will choose” proves the point is flawed logic. It is an argument from ignorance. Just because WE don’t know how to predict it doesn’t mean that it is a random choice.
While we are talking about philosophical questions: either claim, “deterministic universe” or “random universe”, prohibits “free will”, both in the religious sense and in the common meaning of the word – I have a choice where I’ll eat lunch. I cannot prove it, but I believe that what I just wrote was not predetermined for me, nor was it just a random outcome of some crap shoot. (Maybe others would favor the latter over the former. :)
90. Nancy
Flip a coin 100 times. It’s virtually impossible to predict the outcome in the correct sequence. But you can calculate the outcome should be approximately 50 heads and 50 tails. If you actually try to do this experiment, you will get mixed results. Sometimes you get more heads. Sometimes more tails. Sometimes equal.
This is like climate predictions. If you get more heads, it’s warming in the next 100 years. If more tails, cooling in 100 years. Though we can calculate the probabilities, we cannot accurately predict the outcome – warming or cooling – much less the correct sequence. Of course it’s possible to predict it correctly by sheer luck.
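A quick simulation (standard-library Python; the trial counts are arbitrary) shows how rarely the “most likely” outcome actually occurs:

import random

random.seed(1)
trials, flips = 10000, 100
exactly_even = more_heads = more_tails = 0
for _ in range(trials):
    heads = sum(random.randint(0, 1) for _ in range(flips))
    if heads == flips // 2:
        exactly_even += 1
    elif heads > flips // 2:
        more_heads += 1
    else:
        more_tails += 1

# Exactly 50/50 is the single most likely outcome, yet it occurs only about
# 8% of the time (C(100,50)/2^100 ~= 0.0796); the rest splits between
# "more heads" (warming) and "more tails" (cooling).
print(exactly_even / trials, more_heads / trials, more_tails / trials)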
• Dr. Strangelove:
Associated with a sequence of coin flips is a pair of frequencies e.g. 50 heads and 50 tails. A coin flip is an example of an event. The frequency of “heads” is the count of events for which “heads” was the outcome. The frequency of “tails” is the count of events for which “tails” was the outcome. For the IPCC climate models, there are no events, frequencies or relative frequencies.
A “prediction” is a proposition regarding the numerical values of the relative frequencies of the outcomes of events. As there are no relative frequencies for the IPCC climate models, there can be no predictions from them. In the official parlance of the IPCC, these models produce “projections” rather than “predictions.”
One validates or falsifies a predictive model by comparing the predicted to the observed relative frequencies of the outcomes of events. For an IPCC climate model there are neither predicted nor observed relative frequencies. Thus, such a model can be neither validated nor falsified. It can, however, be “evaluated.” In an IPCC-style “evaluation,” model “projections” of the global temperature are compared to a selected global temperature time series. An “evaluation” is, however, logically and scientifically worthless.
91. I’m reminded of Godel’s Theorems of Incompleteness—which, oddly, are never mentioned to explain why it’s impossible for a computer, which operates from a program which is essentially a set of mathematical laws, to predict the future.
Sorry. As I said, a muddle of observations, thoughts, etc. that I haven’t formed into a theory coherent enough to put across yet. I DID say so!
92. Retired Engineer John:
Re: “…models are never going to tell you the correct answer.”
A trick of the model-building trade makes it possible for a model to give the correct answer to a question notwithstanding details like the possibility of an undersea volcano. The trick is to abstract (remove) the descriptions of things from selected details. Thus, for example, the description of the macrostate of a thermodynamic system gives a correct answer to a question about the state of this system.
93. Terry, what I was saying was that you could have a perfect model based on the climate as it existed at the start of the model forecast, but the physical plant of the Earth can change, and your perfect model will no longer be correct. In other words, the Earth changes after the model forecast begins; those changes alter the real world, and the model cannot track them, as they are not predictable and are not included. The longer the time period, the greater the probability of such changes. The Earth is too chaotic to predict for extended periods of time.
94. I have a choice where I’ll eat lunch
If you live in a deterministic universe then the prior state of the universe determines where you will eat lunch. You have no say in the matter, as it has already been decided long before you were born.
Much the same for me. I woke up Sunday morning with an inspiration. A dream of how to connect the dots. Spent the day madly scribbling, fired it off to Anthony before the dream could fade. Very pleasantly surprised to see the paper in print and the many kind comments.
Your comments on the n-body problem are especially interesting. I had wanted to include this in the paper, as I do see it as part of the whole. It ties the microscopic to the macroscopic, relating the number of attractors in a chaotic system to the number of bodies in orbit, with a similarly intractable computational future.
Contrary to what has been written, Einstein’s ideas were not readily accepted. Rather, Relativity has become accepted because its predictions have proven correct, time and time again. And let’s face it, Relativity defies common sense. Time dilation, length contraction: how is any of this possible? Where is the mechanism?
Perhaps the single greatest practical example of this is GPS. When the system was launched, the time correction for Relativity was not turned on. There was still considerable doubt that it was required. However, when the system proved inaccurate, the Relativity correction was enabled and the rest, as they say, is history.
Perhaps this will serve as inspiration to you the reader. Seize an idea and put it to paper. You are at least as likely to be correct as the Climate Models.
96. ” Intuition tells me that determinism is incompatible with free will.”
The bane of determinism is emergent properties. Perhaps QM is a slew of these properties, yet the argument goes full circle when by merely perceiving something you have created it. Can we then create reality by free will?
Seems like what the modelers are trying to do.
97. Edward Lorenz, the meteorologist and mathematician, discovered at MIT in 1961 that the equations of meteorology have chaotic solutions. As Freeman Dyson notes in “A Many Coloured Glass: Reflections on the Place of Life in the Universe,” John von Neumann in the 1940s had invented coded software for computers and believed that through simulating the fluid dynamics of the atmosphere it would be possible to both predict and control climate. “If the situation is stable, we can predict what will happen next. If the situation is unstable, we can apply a small perturbation to control what will happen next… So we shall be masters of the weather. Whatever we cannot control, we shall predict, and whatever we cannot predict, we shall control,” wrote von Neumann.
Dyson comments, “Von Neumann of course was wrong. He was a great mathematician but a very poor predictor of the future… Von Neumann was wrong because he did not know about chaos. He imagined that if a situation was unstable, he could always apply a small perturbation to move it into a situation that was stable and therefore predictable. In fact this is not true. Most of the time, when the atmosphere is unstable, the motion is chaotic, which means that any small perturbation will only move it into another unstable situation which is equally unpredictable. When the motion is chaotic, it can neither be predicted nor controlled. So von Neumann’s dream was an illusion.”
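The sensitivity Dyson describes is easy to demonstrate with the logistic map, a standard chaotic toy system (not a weather model); two starting points differing by one part in a million part ways within a few dozen steps:

# Sensitivity to initial conditions with the logistic map x -> 4x(1-x).
x, y = 0.400000, 0.400001  # initial conditions differing by 1e-6
for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, round(x, 6), round(y, 6), round(abs(x - y), 6))
# The gap roughly doubles each step until it saturates; after about 20 steps
# the two trajectories bear no resemblance to each other.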
98. Terry
My example applies to the chaotic climate system, not the IPCC models. The models are just a curve fitting exercise and wishful thinking that if we can fit a curve, we can predict the future. If that were true, there would be many billionaires from predicting the financial markets.
99. @dolan
Godel’s theorems and the 2nd law of thermodynamics are not the same. The theorems are about the completeness and consistency of logical systems. The 2nd law is a statistical description of heat flow based on kinetic molecular theory. The uncertainty in thermodynamics is inherent in the physical system; nothing much you can do about it. The uncertainty in logical systems is due to the limitations of a particular logical system. You can devise a better system. For example, Godel’s theorems do not apply to Peano arithmetic, whose completeness and consistency are provable.
Newtonian mechanics and relativity theory are both deterministic. Quantum mechanics is not. That’s why some physicists say relativity is a “classical” theory. Well, kinetic molecular theory is a classical theory but not deterministic, at least not at the micro level.
“I have a choice where I’ll eat lunch”
Sure you have free will. We cannot predict the movements of all the neurotransmitters in your brain. But if I drop a bowling ball, it has no choice but to obey gravity. Its motion is deterministic. The debate whether the universe is deterministic or random is like debating whether the glass is half full or half empty.
• @ Dr. Strangelove,
I know well what Godel’s theorems are, as well as the Second Law of Thermodynamics, and I did not ever say they were the same. Please note that you are not correct about what the Second Law of Thermodynamics is, either. What I was saying (or trying to) is that Godel is useful for predicting why a closed system like a computer program cannot predict the future. As I also said, I see a connection between his theorems and the limitations that the Second Law puts on the system we refer to as reality.
I see that you too are hung up on “deterministic”. Yes, Relativity is considered “classical” mechanics. Is that a way of “dismissing” it? Does that make it “incorrect”? Flawed, somehow? I think you are too dismissive while tossing descriptions around and not explaining yourself—at least, not in a way that I can follow. Stating that “uncertainty” is part of the system of Thermodynamics is a non sequitur. Not sure how you mean that regarding the Second Law…?
Feel free to disagree with me, and if you can put it into words, and have time, please explain why (and if you can’t put it into words yet, that’s fine too—it’s a complex subject. I’ve been thinking about Godel and what I see as a connection between the implications of his theorems and the Second Law and what it says about entropy on and off for decades now, and still can’t put it into words). But please don’t bother to lecture me about the meaning of either the Law or Theorem. I don’t need it, and it adds nothing to the discussion.
Also, I believe Edward Nelson of Princeton claimed to have proved Peano Arithmetic inconsistent a few years ago.
100. @ Nancy Green
March 12, 2014 at 9:46 pm
You said: “I have a choice where I’ll eat lunch.”
Both are true. I believe in Free Will in the sense that our thoughts influence our actions, and our thoughts influence those thoughts which influence our actions. Our tastes, experiences, beliefs and values, habits and judgements, genetics, emotions, sobriety, perceptions and errors influence the thoughts that influence the thoughts that influence the actions.
But with all my Free Will, I cannot do as a parrot does, and flap my wings as a result of my free choice. The prior state of the universe over millions of years has given me no say in the matter; it was already determined long before I was born, over the course of hominid evolution.
101. “But as it turns out, with our present level of understanding, God does play dice”
On the contrary – it shows that HUMANS ARE NOT GOD and cannot know everything. Heisenberg’s Uncertainty Principle states that we cannot simultaneously measure both the momentum and the position of a sub-atomic particle to arbitrary precision.
It says nothing about God.
102. We cannot predict the movements of all the neurotransmitters in your brain
Our inability to predict an event is limited by our knowledge. Determinism has no such limitation. If the movements of all the neurotransmitters in your brain can be fully determined by physical laws from some previous state, then you are not free to choose. The choice was made by the previous state.
103. This has been an interesting and curious thread.
Consider this. I am lost on the ocean or in a rural area. It is dark, overcast, and I have no navigation equipment. Assume for the moment that the setting is a rural roadway and I have reached a T intersection. I ask my passenger which way to go. She says: don’t ask me, I have no idea. Neither do I so I flip a coin and it turns out to be the right way home. Is this deterministic or random (i.e. 50/50 chance)?
On another note: while I have absolutely no idea if it is true, I could present quite valid arguments that a photon travels in a spiral/corkscrew trajectory. I have no reason to believe it, but I could rationally argue the point with only a couple of assumptions, one being that the trajectory is similar to a corkscrew.
Only on a thread like this would I offer the above statement. I look forward to another thread in the future where I can offer such unproven but supportable claims that I would make in the world that is not the one we know now. Photons travel in a spiral trajectory and are maintained and driven by their own gravitational (?) and/or other force. It will be interesting, but it could be a bunch of BS. If you think about this you may lose sleep.
104. Col Mosby says:
March 11, 2014 at 8:28 pm
That can be improved upon. The problem there is that most people are somewhat lost within themselves, and/or have never tried to peer inside to understand that which lies within. Maybe they do for a short time when young and full of interest to learn and experiment, but almost all quickly give up on the inner search and focus everything on that which lies outside. I know this because I kept on searching within, and I remember how most others simply wandered away to pursue the glittering objects that abound in this world and will distract and amuse. I have had friends who were involved in that search ask me many years later “Did that really happen, Did we really make those connections?”. My brother even responded to me in this fashion. Yet, when I related the story as I remembered it and detailed the circumstances, I could see a light spark on within him and he would get a glimmer of the old memory within himself.
Regarding models, this reminds me a bit of what I had to do to be successful at football betting. Although I also used a similar exercise for other tasks. I started betting football on the spur of the moment. Starting out, I knew very little about key issues that a normal handicapper would consider in his decision making process. I formed my own method to assess the most likely outcome, and I soon had success. I found that it was very important to develop a strong formula for discarding the least likely or poorly understood propositions. Thinking of how I accomplished that and the benefits derived from doing so, the IPCC method of using models to glean understanding seems absolutely absurd. They should have been weeding out the weaker models from early on, but they also had the very serious problem of confirmation bias right from the beginning. In a betting system, confirmation bias is a dead end. At the peak of my skill in handicapping my own brother actually asked me if I could foresee the future. He couldn’t believe that I could make so many right decisions, because he knew how little I knew about the teams. The crowd at Reno, Nevada was also impressed. I had to take to sitting with my back to a wall when I did my homework, to keep people from trying to look over my shoulder. From my handicapping experience in particular, I have a partial understanding of the modeling problems.
It’s luck!
106. Nancy
Yes, the present state is determined by the previous state. That’s how your brain makes a choice. If there is no neurotransmitter near the neurons, there is no activity in those neurons. If this happens to a large part of your brain, you can’t make a decision. Probably you can’t even breathe, or your heart will stop, since all these are regulated by the brain.
We can argue philosophically about free will. But neurologists are pretty sure brain activities obey the known laws of physics and chemistry. Some even argue free will is an illusion since experiments show some actions happen faster than the rational processing speed of the brain. Like tennis players hit the ball faster than their brain can think about how to hit the ball.
The motion of the coin is deterministic; it obeys Newtonian mechanics. That it is the right way home is random; it obeys probability theory: P(right way) = 0.5 and P(wrong way) = 0.5.
107. Godel is useful for predicting why a closed system like a computer program cannot predict the future
Murphy’s Laws of Computer Programming
Any non-trivial program contains at least one bug.
• @ Nancy Green:
Murphy is more useful than Determinism.
Dr. Strangelove says:
March 13, 2014 at 8:03 pm
“Godel is useful for predicting why a closed system like a computer program cannot predict the future”
Computer programs predict the flights of spacecraft. If Godel were a problem, the Apollo astronauts would be dead.
Sir, I beg to differ: the programs did not predict anything. They calculated a solution to a complex problem, and at the end the men in control of the command module and lunar module actually flew the craft to achieve their final orbit and landings, respectively, fine-tuning the results of the calculations. No computer program at the time could have done what the astronauts accomplished by themselves, and again, computers DIDN’T. Engineers armed with slide rules did most of it.
Nancy, again, beautiful, beautiful, beautiful. Not gonna bring world peace or convince Alarmists, but you’ve made me happy, if that means anything, and with your permission, will use your description to teach others.
Dr. Strangelove, I don’t wish to trade inconclusive snippets of what anyone with Google can collect and display, to achieve no further learning or understanding. I get the impression that if I say “Fire engines are red,” you will tell me what percentage are actually green.
I consider these last exchanges futile examples of blurb-parsing with no intelligible content worth the effort. You have your opinions. I have mine. I’m happy, on this subject, to leave it at that.
108. Heisenberg’s Uncertainty Principle … It says nothing abut God.
Bell’s Theorem answers the question of God playing dice. It establishes that local hidden variables are inconsistent with observation, at our present level of understanding.
109. neurologists are pretty sure brain activities obey the known laws of physics and chemistry
These laws do not establish determinism. Quite the opposite. My point remains. Free will is inconsistent with determinism, because your decisions would depend on the state of the universe before you were born.
Since in our earlier article we established that your “free will” could not be distinguished from the actions of a particle, it could well be that free will is an illusion of mind.
110. Dolan
Don’t be offended. This is not a lecture; I don’t do that, which is why I don’t bother to explain at great length. I would rather you do your own self-study. BTW I’m referring to Boltzmann’s formulation of the 2nd law. If you’re thinking of Clausius’s or Kelvin’s formulation, I see why you disagree.
I’m afraid you misunderstood Godel’s theorems. The fact that you’re saying Edward Nelson proved Peano arithmetic to be inconsistent is actually proof that Godel’s theorems do not apply to Peano arithmetic. The theorems apply to logical systems whose completeness and/or consistency are unprovable. BTW the majority of mathematicians accept Gentzen’s proof that Peano’s axioms are consistent. But that’s beside the point.
I do not dismiss relativity theory. It is correct. I merely pointed out that it’s deterministic.
111. “your decisions would depend on the state of the universe before you were born”
This is physically impossible, because your brain did not exist before you were born. The present state depends on the previous state, but not on previous states 13 billion years ago.
112. In case the significance of hidden variables is unclear: I hold out my hands with a coin in one. Based on which hand contains the coin, I will take path 1 (left) or path 2 (right) to the future. Your task is to find out which hand contains the coin, so that you can predict which path I will take and thus predict the future.
I know the hidden variable (which hand contains the coin), so I can predict my actions deterministically; but since you don’t know the hidden variable, you must predict my actions probabilistically.
Bell’s Theorem establishes that there is no hidden variable to tell us which path will be chosen to the future. Thus, our common-sense belief that there is only one possible future, determined by the present and physical laws, is false. Therefore the future cannot be determined, except on the basis of probabilities.
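For readers who want the substance behind the name: in its usual CHSH form, Bell’s theorem bounds any local-hidden-variable account by

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, \]

where the \(E\)’s are correlations between measurements made with settings \(a, a'\) on one side and \(b, b'\) on the other. Quantum mechanics predicts, and experiments observe, values of \(|S|\) up to \(2\sqrt{2} \approx 2.83\), which is why no pre-assigned “coin in the hand” can reproduce the observations.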
114. This is physically impossible because your brain did not exist before you were born.
But in a deterministic universe, the creation of my brain was a result of the physical state of the universe prior to my conception and the physical laws that determined my growth. This would extend back to the beginning of this universe, back to the universe that gave birth to this one, and so on and so on.
115. Present state depends on previous state but not previous states 13 billion years ago.
You are arguing that somewhere between the present and 13 billion years ago, the universe was not deterministic.
116. Computer programs predict the flights of spacecraft.
The first space shuttle launch aborted at T-13 seconds. For reasons of safety the shuttle had 5 main computers. 4 to calculate using identical programs and one to compare the results. At T-13 the 5th computer detected that the 4 other computers were not in agreement and aborted the launch.
We now know that identical computers do not deliver identical results.
Bell’s theorem describes the behavior of subatomic particles. People and planets are not subatomic particles. Astronomers don’t compute the probability that the earth will revolve around the sun this year. Maybe it won’t, if a giant comet hits it.
Sorry I’m not interested in philosophical debate. Suffice it to say some phenomena are deterministic, others are not. If you want to believe it’s just one or the other, so be it.
118. Dolan
119. Bell’s theorem describes the behavior of subatomic particles. People and planets are not subatomic particles.
As demonstrated earlier, your “free will” cannot be distinguished from the actions of a particle. The orbits of the planets are computationally intractable, even outside of quantum mechanics, due to the n-body problem. The exception is the restricted n-body problem, in which the planets all lie in the same plane.
Philosophy deals with “Why” something happens. Thus a PhD is a Doctor of Philosophy in her chosen field. Much more interesting and practical are the other 4 W’s. Who, What, When, Where. “Why” can never be answered fully. It is cloaked in the infinity of the universe. Zen provides as good an answer to “Why” as does Science.
120. I do not dismiss relativity theory.
Relativity was a giant step forward, combining the work of Newton and Maxwell. The great challenge ahead is to combine Relativity and Quantum Mechanics.
121. People and planets are not subatomic particles.
On the contrary, people and planets are all composed of subatomic particles. These provide the illusion of determinism, because of the law of large numbers. While the odds are good that a single particle might wink in or out of existence, the odds are vanishingly small that all the particles that make up planet earth will do this at the same time. QM tells us it is not impossible. Simply very long odds.
122. I agree with you Nancy. The odds are vanishingly small. That’s why we ignore it. You don’t have to worry that all the atoms in your body will quantum tunnel to a parallel universe.
The present state depends on immediate previous state. Shooting the last snooker ball depends only on the last positions of the two balls. It doesn’t matter how you shot all previous balls. Different snooker games will end the same way so long as the last ball positions are the same.
The computers did the celestial mechanics calculations. The astronauts did the flying with the aid of computers. They already had mainframe computers in the 1960s. Of course that didn’t prevent engineers with slide rules from doing what they love to do. Sir, contrary to your impression, I really don’t care if fire engines are red or green so long as they can put out fires.
• Re: free will?
The following is a conclusion from modern information theory. The firing of one of Nancy’s neurons is not conditional upon the firing of those neurons which make synapses with its dendrites. It is conditional upon the PATTERNS of firing of these neurons. That firing is conditional upon patterns of firing is what makes it possible for Nancy to learn. In learning, her brain updates the descriptions of those patterns which produce firing.
Learning provides Nancy with information about the unobserved outcomes of events given the observed conditions that precede them. By placing her physical self in that condition which predicts the outcome she desires, Nancy exhibits “free will.”
Were the firing of each of her brain’s neurons to be conditional upon the firing of those neurons that make synapses with its dendrites, Nancy could not exhibit free will. However this proposition is false.
123. BTW the argument that “the odds are vanishingly small” is also speculative. Real experiments on the famous Schrodinger’s cat paradox reveal that big objects don’t behave like subatomic particles (I’m not surprised). An electron can be in a state of superposition, but a cat can never be simultaneously dead and alive (again, I’m not surprised).
124. “Free will is inconsistent with determinism”
Absolutely. Free will is an emergent property of neurons. It cannot be determined by our paltry current understanding.
But is it a quantum property? There is always this issue of scale. Quantum=nano. None of us would argue that the microwaves bouncing off the oxygen molecules telling us the models are out to lunch exist only because we perceive them. Quantum is largely irrelevant at earth scale and human life timeframes. This may be a problem applying quantum/chaos to computer models and climate.
If we were to live our lives by quantum theory, we would all be Post Modern narcissistic nihilists.
125. The universe is completely deterministic. You can calculate any state and the position of any element at any time, past, present, or future. All you need are the complete set of governing principles for the universe, which we don’t quite have just yet, and the initial condition, which we haven’t managed to identify so far.
Once we get those details ironed out, the remaining problem is computational speed. For example, if you want to calculate where, say, a photon will next be, you have to work out the solution faster than the photon can get there. Once you’ve solved the problem of computational speed, it’s all really very easy.
Oh wait… I almost forgot… you also have to have a way of identifying everything in the universe. You don’t want to announce you’ve calculated the next position for Fred the Photon when it was actually Alice the Photon that was zipping by. Alice would not be amused.
Meanwhile, we just do a lot of guesstimating.
(Wonderful essay, Nancy Green. Thank you.)
126. From a generalist approach to the subject of global environmental perturbations (human-driven and otherwise), I understand that our environment has mechanisms of resilience that get activated at local and global scales. Are we aware of those mechanisms? Temperature in our atmosphere can increase due to a number of factors, greenhouse gases and solar radiation among others. By analogy, the human body uses temperature to react against pathogens: it rises from its balanced state, producing fever, and this increase triggers the release of water as sweat to absorb the heat through evaporation. Consequently the human body loses water that needs to be replaced. So, in our global ecosystem, there is a debate about whether there has been an increase in heat or temperature. What would be the mechanisms of resilience in our global environment working to absorb or release those increases in heat or temperature? (I would go with water as the heat/energy carrier and the weather systems as the physical mechanics to redistribute and release heat/energy, like stirring a spoon to cool down your soup.)
Now, I feel it is very important to understand the mechanisms of resilience at the global scale. What are they? Are they working properly? I don’t think the mechanisms of resilience against increases in temperature due to solar radiation are the same as those against increases due to greenhouse gases. Should such events not be reflected in the ionic charge of the atmosphere? And would they not be more localised in time (start to finish) than constant heating from inside? (Honestly curious.)
So, models can only work with non-sporadic events, as opposed to sporadic ones such as bursts of solar radiation. So, from an anthropogenic point of view: what if the global ecosystem has mechanisms of resilience to absorb increases in temperature that make our correlations weak over time? Are these mechanisms of resilience being incorporated in our predictive models? The most probable repercussion of activating mechanisms of resilience would be to see cyclic patterns of change. Meaning, e.g., temperature rises, and weather patterns increase their performance in releasing energy until the atmosphere recovers to a point where the cycle starts again. However, the point of starting again might be different each time, since the global ecosystem would adapt to the pressure of dominant increasing patterns of temperature. So each time the cycle would start at a higher temperature, inducing an adaptation of the biota to new conditions, until either the system adapts to the point of no longer feeling the perturbation or the mechanism of resilience fails to give a continuous, predictable process. One mechanism activates another due to synergistic effects, and so on; with each new mechanism activated, a new model has to be defined. I am not sure this is contemplated.
I believe that resilience is playing a major role, not only in our environment and in our understanding when modelling changes in our ecosystem, but also in the mindset applied in the debate.
From an environmental point of view I understand that any ecosystem has a limited capacity to absorb perturbations. So, from a hypothetical approach to the subject of human impact versus environmental change, I would like to see a case-scenario study answering three questions: Could human development have an impact on the ecosystem at a global scale? What would humans have to do to alter the ecosystem at a global scale? Which part of the ecosystem (soil, atmosphere, light and heat from our sun, water, or living organisms) would primarily reflect the impact of human perturbation? In case the answer to the first question is “yes”, how much of the answers to the second and third questions matches actual facts?
127. Nancy Green says, “If you live in a deterministic universe then the prior state of the universe determines where you will eat lunch.”
Agreed, which makes the deterministic universe idea uncomfortable.
However, if you live in a probabilistic universe, you have no choice either. The place you eat is decided by some grand dice roll, which is an equally discomforting idea.
A probabilistic universe is equally inconsistent with free will.
128. “A probabilistic universe is equally inconsistent with free will.”
Very cool. Probabilistic determinism, but emergent properties are wildly improbable. How would you set the odds that consciousness would emerge from neurons when they first evolved in metazoans?
Improbability is the freedom Nancy is talking about. She yearns to find it in chaos, but it clearly arises spontaneously in nature.
129. A probabilistic universe is equally inconsistent with free will.
In a deterministic universe there is only 1 future for a given present. There is nothing to choose, because your choice itself is fully determined by physical laws and initial state. What you think is free will is simply an illusion, because your actions are fully determined by your past.
In a probabilistic universe there are many futures possible for a given present. The option exists to choose one of them, in a fashion similar to “Let’s Make a Deal”. You can choose door 1, 2 or 3. But you don’t get to see what is behind the door until after you make your choice.
130. Another way to think about it is “the future is written”. In a deterministic universe this statement is true, because the future would be fully determined by the present. In a probabilistic universe, the future is not written until you arrive. Until that point it is simply one possibility out of many.
131. Nancy,
Thank you for the expansion on your ideas. I am learning from you. What I think now is that you believe the following:
1) A probabilistic universe allows for free will, whereas a deterministic universe does not.
2) The double-slit (or some other) experiment proves that we live in a probabilistic universe.
I would like to ask you to expand on this, not because I believe either is untrue, but because I truly don’t understand the proof. For the record, I also believe that we have free will, and I don’t argue that we don’t live in a probabilistic universe. I am simply an agnostic on the latter.
If you have a proof, I can learn from it and would be grateful if you would share it with me. I have an undergraduate degree in physics and a master’s in geophysics, so I should be able to follow your explanation. I am teachable.
There is no need to read further. Hereafter, I just explain why I am agnostic on the concept of a probabilistic universe and why a probabilistic universe does not necessarily imply free will.
You write: “In a probabilistic universe there are many futures possible for a given present. The option exists to choose one of them…”. The first sentence, of course, I agree with, but I don’t see any “option”. I assume that you are not saying that we can, through force of will, choose which slit the photon passes through. If that were in our power, one could just as easily, through force of will, alter predetermined outcomes. Either ability is a supernatural power. Why would someone prefer one over the other as an explanation of free will?
Your essay suggests that the macroscopic world we live in is determined by microscopic events. This is a proposition that I would agree with. Hence, not only do microscopic events determine the options that are available to us; they must also determine our decisions. Gymnosperm described my argument as “Probabilistic determinism”. At first, I thought no, but speaking only to the free will issue, yes, one is just swapping one form of determinism for another. If our decision process is determined on a probabilistic microscopic scale, then the dice determine our decisions and our future.
If you can clear up my confusion on that matter, I would be greatly appreciative.
With regard to the issue of the proof of a probabilistic universe, let’s consider the double-slit experiment. At first, flooding the double slits with light results in a very reproducible diffraction pattern. However, reducing the light intensity and examining closely, we see photons passing through one slit or the other in an unpredictable manner. We conclude that an identical experiment yields different results.
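For reference, that “very reproducible diffraction pattern” is the standard two-slit intensity for idealized, narrow slits:

\[ I(\theta) \propto \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right), \]

with slit separation \(d\) and wavelength \(\lambda\). Individual photon arrivals are unpredictable, yet their accumulated statistics reproduce this curve exactly.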
However, is it really identical? Let’s look at the “point source”. It consists of millions of atoms/molecules. The probability that any two photons were emitted from the same atom is extremely small.
OK, I’ll do a thought experiment with you to resolve the problem. We shall assume that we can isolate a hydrogen atom at absolute zero (to eliminate thermal motion) and, further, that we can get the atom to emit photons one after another without disturbing its position. Now from symmetry we can reason that it is equally likely to emit a photon with any trajectory. Therefore, looking only at photons with identical trajectories, would they pass through the same slit? I’m guessing here, but I would have to say yes. But now, you say, wait: it is the fact that the atom emits in all directions with equal probability which proves a probabilistic universe.
OK, then I’ll revert to analogy to illustrate my thinking, not to prove a point. This is similar to arguing that my missing a golf shot proves a probabilistic universe. (I think that I’ll use that the next time I miss a putt. Those little windmills ruin my game.)
Back to proof: that we cannot control or predict which trajectory a photon takes does not prove that it is not responding to some physical law in a reproducible manner. I freely admit that is a possibility, but where is the proof?
132. Re: free will
Free will is a property of entities that are “… guided by cyclical patterns of information flow known as feedback loops.” These are open systems (systems that exchange matter and energy with their environment) that “…maintain and develop structure by breaking down other structures in the process of metabolism, thus creating entropy – disorder – which is subsequently dissipated in the form of degraded waste products.” The system of which these entities are a part gains entropy, but the entities themselves lose entropy. In doing so, they organize themselves.
133. I agree with, but I don’t see any “option”.
In a deterministic universe there is only one future, as determined by current state and physical laws. You have no choice.
In a probabilistic universe there are many futures. The problem you are having is how to select one over the other. How does one make a choice of one future over the other?
I know of no way to do so. You cannot select which path the photon will choose. This is making you feel uncomfortable, because you cannot see the utility.
However, you are asking the wrong question. How can there be any choice if there is only 1 future? With infinite possible futures there is the potential for choice. However, that does not mean that choice is guaranteed.
A better way to think of this is in terms of opportunity. In the probabilistic future all things are possible. In the deterministic future, the future is already written.
While this does not guarantee that you have free will, the double-slit experiment (along with Bell’s Theorem) holds forth the possibility that you do, whereas the deterministic universe provides no opportunity for choice.
For those with a religious background, free will was one of God’s promises to mankind. So, in this respect it appears that religion and science complement each other. That is perhaps one of the greatest accomplishments of quantum mechanics, to provide a first step to reconcile religion and science.
134. Alex says:
March 12, 2014 at 3:14 am
This may have been answered already; I didn’t get that far.
You must have forgotten that photons come in different wavelengths, and they all exhibit this behavior. You can emit a photon from an antenna and have an exact location for A and an exact location for B. BTW you can also design your slit to an exact proportion of the wavelength you’re working with in this case.
135. Momentum is not position.
Phase space combines position, momentum and time.
136. ferdberple says:
Thank you for your response.
I will answer your question. There can be no choice (no free will) if there is only one future.
I am not arguing that we live in a deterministic universe. (I am an agnostic on that.) I am trying to understand two premises:
1) The double-slit experiment proves that we live in a probabilistic universe.
2) A probabilistic universe implies that we have free will.
I am not trying to “win” an argument. I am just trying to understand. I believe the following is what you are saying in syllogistic form:
Major premise: We live in a probabilistic universe.
Minor premise: All futures are possible.
Conclusion: Therefore, we have (the possibility of) free will.
The above is what I believe that you are saying. Here is what I am hearing:
Major premise: We live in a universe where chance decides the outcome of all events.
Minor premise: All futures are possible.
Conclusion: Therefore, we can choose the outcome of some events.
Hence, I believe that this is a faulty syllogism. The conclusion violates the major premise.
OK, where am I going astray? Is the syllogism in fact valid? Have I misstated the premises? Have misstated the conclusion?
The subject fascinates me and if you can clear up my confusion, I would be grateful.
137. if you can clear up my confusion
No offense meant, but good luck with that!
To quote Morpheus “Welcome to the real world.”
138. Computers which utilize quantum effects in the semiconductors, do not routinely produce different answers to the same program
Semiconductor devices use quantum effects, but not quantum states. For example, programmable devices use quantum tunneling of a charge across an insulator, but they don’t use the quantum state of the device. Spintronics and quantum computers are exploring this space, but they are far from everyday electronics, yet.
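For reference, the tunneling in question is governed, in the WKB approximation, by a transmission probability of roughly

\[ T \approx \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,\big(V(x)-E\big)}\;dx\right), \]

so a floating-gate memory cell exploits a well-characterized quantum rate without depending on the superposition state of any single electron.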
Monday, October 19, 2009
Intelligent Design - The Anthropic Hypothesis
Isaiah 1:18
"Come now, and let us reason together,",,,
Anthropo - Greek origin from anthropos: man, meaning man, human, as anthropology
In 1610, the Italian scientist Galileo Galilei (1564-1642) found observational support for Polish astronomer Nicolaus Copernicus’s (1473-1543) heliocentric theory. The heliocentric theory was hotly debated at the time, for it proposed a revolutionary idea for the 1600s: that all the planets revolve around the sun. Many people of the era had simply presumed everything in the universe revolved around the earth (the geocentric theory), since from their limited perspective everything did seem to revolve around the earth. The geocentric theory also seemed to agree with the religious sensibility of being made in God’s image, though the Bible never actually directly states that the earth is the ‘center’ of the universe.
Job 26:7
“He stretches the north over empty space; He hangs the earth on nothing”
Galileo had improved upon the recently invented telescope. With this improved telescope he observed many strange things about the solar system, including the phases of Venus as she revolved around the sun and the fact that Jupiter had her own satellites (moons) revolving around her. Thus, Galileo wrote and spoke about what had become obvious to him: the planets do indeed revolve around the sun. It is now commonly believed that man was thereby cast down from his special place in the grand scheme of things, for the Earth beneath his feet no longer appeared to be the ‘center of the universe’; indeed, the Earth is now commonly regarded by many people as nothing but an insignificant speck of dust in the vast ocean of space. Yet the earth actually became exalted in the eyes of many people of that era with its supposed removal from the center of the universe, since centrality in the universe had a very different meaning in those days: a meaning that equated being at the center of the universe with being at the ‘bottom’ of the universe, or in the ‘cesspool’ of the universe.
The Copernican Revolution - March 2010
Excerpt: Danielson (2001) made a compelling case that this portrayal is the opposite of what really happened, i.e., that before the Copernican Revolution, Earth was seen not as being at the center, but rather at the bottom, the cesspool where all filth and corruption fell and accumulated.
Yet contrary to what is popularly believed by many people today, namely that the earth is nothing but an insignificant speck of dust lost in a vast ocean of space, there is actually a strong case to be made for the earth being central in the universe once again.
In what I consider an absolutely fascinating discovery, 4-dimensional (4D) space-time was created in the Big Bang and continues to 'expand equally in all places':
Where is the centre of the universe?:
Thus from a 3-dimensional (3D) perspective, any particular 3D spot in the universe is just as much the ‘center of the universe’ as any other spot. This centrality holds for every 3D place in the universe because the universe is a 4D expanding hypersphere, analogous in 3D to the surface of an expanding balloon. All points on the surface are moving away from each other, and every point is central, if that’s where you live.
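The balloon analogy can be made quantitative. If every distance in the universe scales by a common factor \(a(t)\), then for any observer, a galaxy at comoving position \(x\) sits at \(r(t) = a(t)\,x\) and recedes at

\[ v = \dot{a}\,x = \frac{\dot{a}}{a}\,r = H\,r, \]

the Hubble law. Because the same law holds from every vantage point, no point on the ‘surface’ is privileged.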
4-Dimensional Space-Time Of General Relativity - video
So in a holistic sense, as facts revealed later in this paper will bear out, it may now be possible for the earth to, once again, be considered ‘central to the universe’. This intriguing possibility is clearly illustrated by the fact that the Cosmic Microwave Background Radiation (CMBR), remaining from the creation of the universe, forms a sphere around the earth.
Earth As The Center Of The Universe - illustrated image
The Known Universe - Dec. 2009 - a very cool video (please note the centrality of the earth in the universe)
This centrality that we observe for ourselves in the universe also happens to give weight to the verses of the Bible that indirectly imply centrality for the earth in the universe:
Psalm 102:19
The LORD looked down from His sanctuary on high, from heaven He viewed the earth,
On top of this ‘4D expanding hypersphere geometry’, and on top of the considerations of Einstein’s special theory of relativity showing that the speed of light stays the same while all other movement in the universe, no matter how fast or slow, is relative to that ‘unchanging’ speed of light, the primary reason the CMBR forms a sphere around the earth is that the quantum wave collapse of photons to their ‘uncertain’ 3D wave/particle state is dependent on ‘conscious observation’ in quantum mechanics. Moreover, this wave collapse of photons to their ‘uncertain’ 3D wave/particle state is shown by experiment to be instantaneous and without regard to distance, i.e. it is universal for each observer (A. Aspect). The CMBR, coupled with quantum mechanics, ultimately indicates that ‘quantum information’ about all points in the universe is actually available to each ‘central observer’, in any part of the 4D expanding universe, simultaneously. The primary reason that ‘observers’ are now to be considered ‘central’ in the reality of the universe is the failure of materialism to explain reality. Here is a clip of a talk in which Alain Aspect discusses the failure of ‘local realism’, or materialism, to explain reality:
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video
The falsification for local realism (materialism) was recently greatly strengthened:
Physicists close two loopholes while violating local realism - November 2010
This following study adds to Alain Aspect's work in Quantum Mechanics and solidly refutes the 'hidden variable' argument that has been used by materialists to try to get around the Theistic implications of the instantaneous 'spooky action at a distance' found in quantum mechanics.
Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009
(of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)
Quantum Mechanics has now been extended by Anton Zeilinger, and team, to falsify local realism (reductive materialism) without even using quantum entanglement to do it:
‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011
Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically.
Falsification of Local Realism without using Quantum Entanglement - Anton Zeilinger
One of the first, and most enigmatic, questions people ask after seeing the quantum actions 'observed' in the infamous double slit experiment is, "What does conscious observation have to do with anything in the experiments of quantum mechanics?", and, by extrapolation, "What does conscious observation have to do with anything in the universe?" Yet the seemingly counter-intuitive conclusion that consciousness is to be treated as a separate entity when dealing with quantum mechanics, and thus with the universe, has some very strong clout behind it.
Quantum mind–body problem
Eugene Wigner
Further weight for consciousness to be treated as a separate entity in quantum mechanics, and thus the universe, is also found in the fact that it is impossible to 'geometrically' maintain 3-Dimensional spherical symmetry of the universe, within the sphere of the Cosmic Microwave Background Radiation, for each 3D point of the universe, unless all the 'higher dimensional quantum information waves' actually do collapse to their 'uncertain 3D wave/particle state', universally and instantaneously, for each point of conscious observation in the universe, just as the experiments of quantum mechanics are telling us that they do. The 4-D expanding hypersphere of the space-time of general relativity is insufficient to maintain such 3D integrity/symmetry, all by itself, for each different 3D point of observation in the universe. The primary reason the 4D space-time is insufficient to maintain this 3D symmetry by itself is that the universe is shown to have only about 10^79 atoms. In other words, it is geometrically impossible to maintain such 3D symmetry of centrality, with only finite 3D material resources to work with, for each 3D point in the universe. Universal quantum wave collapse of photons, to each point of 'conscious observation' in the universe, is the only answer that has adequate sufficiency to explain the 3D centrality we witness for ourselves in this universe.
From a slightly different line of reasoning, the following site, through a fairly exhaustive examination of the General Relativity equations themselves, acknowledges the insufficiency of General Relativity to account for the 'completeness' of 4D space-time within the sphere of the CMBR from different points of observation in the universe.
The Cauchy Problem In General Relativity - Igor Rodnianski
Excerpt: 2.2 Large Data Problem In General Relativity - While the result of Choquet-Bruhat and its subsequent refinements guarantee the existence and uniqueness of a (maximal) Cauchy development, they provide no information about its geodesic completeness and thus, in the language of partial differential equations, constitutes a local existence. ,,, More generally, there are a number of conditions that will guarantee the space-time will be geodesically incomplete.,,, In the language of partial differential equations this means an impossibility of a large data global existence result for all initial data in General Relativity.
The following article speaks of a proof developed by legendary mathematician Kurt Gödel, from a thought experiment, in which Gödel showed General Relativity could not be a complete description of the universe:
Excerpt: Gödel's personal God is under no obligation to behave in a predictable orderly fashion, and Gödel produced what may be the most damaging critique of general relativity. In a Festschrift, (a book honoring Einstein), for Einstein's seventieth birthday in 1949, Gödel demonstrated the possibility of a special case in which, as Palle Yourgrau described the result, "the large-scale geometry of the world is so warped that there exist space-time curves that bend back on themselves so far that they close; that is, they return to their starting point." This means that "a highly accelerated spaceship journey along such a closed path, or world line, could only be described as time travel." In fact, "Gödel worked out the length and time for the journey, as well as the exact speed and fuel requirements." Gödel, of course, did not actually believe in time travel, but he understood his paper to undermine the Einsteinian worldview from within.
The fact that photons are shown to travel as uncollapsed quantum information waves in the double slit experiment, and not as collapsed particles, is what gives us a solid reason for proposing this mechanism of the universal quantum wave collapse of photons to each conscious observer.
Double-slit experiment
Excerpt: In quantum mechanics, the double-slit experiment (often referred to as Young's experiment) demonstrates the inseparability of the wave and particle natures of light and other quantum particles. A coherent light source (e.g., a laser) illuminates a thin plate with two parallel slits cut in it, and the light passing through the slits strikes a screen behind them. The wave nature of light causes the light waves passing through both slits to interfere, creating an interference pattern of bright and dark bands on the screen. However, at the screen, the light is always found to be absorbed as though it were made of discrete particles, called photons.,,, Any modification of the apparatus that can determine (that can let us observe) which slit a photon passes through destroys the interference pattern, illustrating the complementarity principle; that the light can demonstrate both particle and wave characteristics, but not both at the same time.
Double Slit Experiment – Explained By Prof Anton Zeilinger – video
Also of note is just how well established quantum theory is as being 'correct':
An experimental test of all theories with predictive power beyond quantum theory - May 2011
Excerpt: Hence, we can immediately refute any already considered or yet-to-be-proposed alternative model with more predictive power than this (quantum theory).
It is also interesting to note that materialists, instead of dealing forthrightly with the Theistic implications of quantum wave collapse, postulated quasi-infinite parallel universes, i.e. the Many-Worlds interpretation, in which no absurdity would be out of bounds (in some parallel universe Elvis could be president, pink elephants could roam the streets, and so on). This interpretation is examined in more detail further down.
The following experiment extended the double slit experiment to show that the 'spooky actions' of instantaneous quantum wave collapse happen regardless of any considerations of time or distance; i.e. it shows that quantum actions are 'universal and instantaneous':
Wheeler's Classic Delayed Choice Experiment:
Genesis, Quantum Physics and Reality
And of course all this leads us back to the question: "What does our conscious observation have to do with anything in collapsing the wave function of the photon in the double slit experiment, and in the universe?"
Moreover, what is causing the quantum waves to collapse from their 'higher dimension' in the first place, since we 'conscious' humans are definitely not the ones causing the photon waves to collapse to their 'uncertain 3D wave/particle' state? With the refutation of the materialistic 'hidden variable' argument, and with the patent absurdity of the materialistic 'Many-Worlds' hypothesis, I can only think of one sufficient explanation for the quantum wave collapse of the photon;
Psalm 118:27
God is the LORD, who hath shown us light:,,,
In the following article, Physics Professor Richard Conn Henry is quite blunt as to what quantum mechanics reveals to us about the 'primary cause' of our 3D reality:
Art Battson - Access Research Group
Personally I feel the word "illusion" was a bit too strong for Dr. Henry to use to describe material reality, and I would myself have opted for something a little more subtle, like: "material reality is a 'secondary reality' that depends on the primary reality of God's mind to exist." The following comment from a blogger on UD reflects fairly closely how I, as a Christian, view reality;
"I do believe in the physical, concrete universe as real. It isn’t just an illusion. However, being a Christian, I can say, also, that the spiritual realm is even more real than the physical. More real, in this sense, however, isn’t to be taken to mean that the physical is “less” real, but that it is less important. The physical, ultimately, really derives its significance from the spiritual, and not the other way around. I submit to you, though, that the spiritual reality, in some sense, needs the physical reality, just as a baseball game needs a place to be played. The game itself may be more important than the field, but the game still needs the field in order to be played. The players are the most important part of the game, but without bats, balls, and gloves, the players cannot play. Likewise, without a physical, concrete reality, the spiritual has “no place to play”. Love, without a concrete reality, has no place to act out its romance; joy has nothing to jump up and down on, and consciousness has nothing to wake up to." - Brent - UD Blogger
Professor Henry's bluntness on the implications of quantum mechanics continues here:
As Professor Henry pointed out, it has been known since the discovery of quantum mechanics itself, early last century, that the universe is indeed 'Mental', as is illustrated by this quote from Max Planck.
Max Planck - The Father Of Quantum Mechanics - Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944) (Of Note: Max Planck was a devoted Christian from early life to death, was a churchwarden from 1920 until his death, and believed in an almighty, all-knowing, beneficent God (though, paradoxically, not necessarily a personal one). This deep 'Christian connection' of Planck is not surprising when you realize that practically every founder of each major branch of modern science, if not every one, also 'just so happened' to have some kind of deep Christian connection.)
Colossians 1:17
Psalm 33:13-15
Moreover, the argument for God from consciousness can be framed like this:
1. Consciousness either preceded all of material reality or is an 'epi-phenomenon' of material reality.
2. If consciousness is an 'epi-phenomenon' of material reality, then consciousness will be found to have no special position within material reality. Conversely, if consciousness precedes material reality, then consciousness will be found to have a special position within material reality.
3. As the preceding experiments of quantum mechanics indicate, consciousness is indeed found to have a special, central position within material reality.
4. Therefore, consciousness is found to precede material reality.
The expansion of every 3D point in the universe, and the quantum wave collapse of the entire universe to each point of conscious observation in the universe, is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics); a congruence that Physicists and Mathematicians seem to be having an extremely difficult time 'unifying' into a 'theory of everything' (Einstein, Penrose).
The Physics Of The Large And Small: What Is the Bridge Between Them?
Roger Penrose
Excerpt: This, (the unification of General Relativity and Quantum Field theory), would also have practical advantages in the application of quantum ideas to subjects like biology - in which one does not have the clean distinction between a quantum system and its classical measuring apparatus that our present formalism requires. In my opinion, moreover, this revolution is needed if we are ever to make significant headway towards a genuine scientific understanding of the mysterious but very fundamental phenomena of conscious mentality.
Quantum Mechanics Not In Jeopardy: Physicists Confirm Decades-Old Key Principle Experimentally - July 2010
Excerpt: the research group led by Prof. Gregor Weihs from the University of Innsbruck and the University of Waterloo has confirmed the accuracy of Born’s law in a triple-slit experiment (as opposed to the double slit experiment). "The existence of third-order interference terms would have tremendous theoretical repercussions - it would shake quantum mechanics to the core," says Weihs. The impetus for this experiment was the suggestion made by physicists to generalize either quantum mechanics or gravitation - the two pillars of modern physics - to achieve unification, thereby arriving at a one all-encompassing theory. "Our experiment thwarts these efforts once again," explains Gregor Weihs. (of note: Born's Law is an axiom that dictates that quantum interference can only occur between pairs of probabilities, not triplet or higher order probabilities. If they would have detected higher order interference patterns this would have potentially allowed a reformulation of quantum mechanics that is compatible with, or even incorporates, gravitation.)
"There are serious problems with the traditional view that the world is a space-time continuum. Quantum field theory and general relativity contradict each other. The notion of space-time breaks down at very small distances, because extremely massive quantum fluctuations (virtual particle/antiparticle pairs) should provoke black holes and space-time should be torn apart, which doesn’t actually happen." - G J Chaitin
The conflict of reconciling General Relativity and Quantum Mechanics appears to arise from the inability of either theory to successfully deal with the Zero/Infinity problem that crops up in different places of each theory:
Excerpt: The biggest challenge to today's physicists is how to reconcile general relativity and quantum mechanics. However, these two pillars of modern science were bound to be incompatible. "The universe of general relativity is a smooth rubber sheet. It is continuous and flowing, never sharp, never pointy. Quantum mechanics, on the other hand, describes a jerky and discontinuous universe. What the two theories have in common - and what they clash over - is zero.",, "The infinite zero of a black hole -- mass crammed into zero space, curving space infinitely -- punches a hole in the smooth rubber sheet. The equations of general relativity cannot deal with the sharpness of zero. In a black hole, space and time are meaningless.",, "Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object,,, According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge.
Quantum Mechanics and Relativity – The Collapse Of Physics? – video – with notes as to plausible reconciliation that is missed by materialists
How Quantum Gravity Destroys Physicalism - video
Moreover, this extreme 'mathematical difficulty' of reconciling General Relativity with Quantum Mechanics into the much sought after 'Theory of Everything' was actually somewhat foreseeable from earlier 20th-century work in mathematics by Gödel:
Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
The following scientist offers a very interesting insight into this issue of 'reconciling' the mental universe of Quantum Mechanics with the space-time of General Relativity:
How the Power of Intention Alters Matter - Dr. William A. Tiller
Excerpt: "Most people think that the matter is empty, but for internal self consistency of quantum mechanics and relativity theory, there is required to be the equivalent of 10 to 94 grams of mass energy, each gram being E=MC2 kind of energy. Now, that's a huge number, but what does it mean practically? Practically, if I can assume that the universe is flat, and more and more astronomical data is showing that it's pretty darn flat, if I can assume that, then if I take the volume or take the vacuum within a single hydrogen atom, that's about 10 to the minus 23 cubic centimeters. If I take that amount of vacuum and I take the latent energy in that, there is a trillion times more energy there than in all of the mass of all of the stars and all of the planets out to 20 billion light-years. That's big, that's big. And if consciousness allows you to control even a small fraction of that, creating a big bang is no problem." - Dr. William Tiller - has been a professor at Stanford U. in the Department of materials science & Engineering
This following experiment is really interesting as to establishing the plausibility of Tiller's preceding hypothesis:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video
I once asked an evolutionist, after showing him the preceding experiment, "Since you ultimately believe that the 'god of random chance/chaos' produced everything we see around us, what in the world is my mind doing pushing your god around?"
Yet, to continue on: the unification, into a 'theory of everything', of what is in essence the 'infinite Theistic world of Quantum Mechanics' and the 'finite Materialistic world of the space-time of General Relativity' seems to be directly related to what Jesus apparently joined together in His resurrection, i.e. the unification of infinite God with finite man. Dr. William Dembski, in the following comment, though not directly addressing the Zero/Infinity conflict between General Relativity and Quantum Mechanics, offers insight into this 'unification' of the infinite and the finite:
The End Of Christianity - Finding a Good God in an Evil World - Pg.31
William Dembski PhD. Mathematics
Also of related interest to this 'Zero/Infinity conflict of reconciliation' between General Relativity and Quantum Mechanics is the fact that an 'uncollapsed' photon, in its quantum wave state, is mathematically defined as 'infinite' information:
Wave function
Quantum Computing – Stanford Encyclopedia
Single photons to soak up data:
It is important to note that the following experiment actually encoded information into a photon while it was in its quantum wave state, thus destroying the notion, held by many, that the wave function is not 'physically real' but merely 'abstract'. i.e. How can information possibly be encoded into something that is not physically real but merely abstract?
Ultra-Dense Optical Storage - on One Photon
The following paper mathematically corroborated the preceding experiment and cleaned up some pretty nasty probabilistic incongruities that arose from a purely statistical interpretation; i.e. it seems that stacking one 'random infinity' (parallel universes, to explain quantum wave collapse) on top of another 'random infinity' (to explain quantum entanglement) leads to irreconcilable mathematical absurdities within quantum mechanics:
Quantum Theory's 'Wavefunction' Found to Be Real Physical Entity: Scientific American - November 2011
Excerpt: David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. "This strips away obscurity and shows you can't have an interpretation of a quantum state as probabilistic," he says.
The quantum (wave) state cannot be interpreted statistically - November 2011
Moreover, there is actual physical evidence that lends strong support to the position that the 'Zero/Infinity conflict' we find between General Relativity and Quantum Mechanics was successfully dealt with by Christ:
Turin Shroud Enters 3D Age - Pictures, Articles and Videos
Turin Shroud 3-D Hologram - Face And Body - Dr. Petrus Soons - video
A Quantum Hologram of Christ's Resurrection? by Chuck Missler
Shroud Of Turin Is Authentic, Italian Study Suggests - December 2011
Scientists say Turin Shroud is supernatural - December 2011
Press release Video on preceding paper:
Cellular Communication through Light
The Real Bioinformatics Revolution - Proteins and Nucleic Acids 'Singing' to One Another?
Excerpt: the molecules send out specific frequencies of electromagnetic waves which not only enable them to ‘see' and ‘hear' each other, as both photon and phonon modes exist for electromagnetic waves, but also to influence each other at a distance and become ineluctably drawn to each other if vibrating out of phase (in a complementary way).,,, More than 1 000 proteins from over 30 functional groups have been analysed. Remarkably, the results showed that proteins with the same biological function share a single frequency peak while there is no significant peak in common for proteins with different functions; furthermore the characteristic peak frequency differs for different biological functions. ,,, The same results were obtained when regulatory DNA sequences were analysed.
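The kind of analysis the excerpt describes, numerically encoding a sequence and then Fourier-transforming the resulting signal to look for shared spectral peaks, can be sketched as follows; note that the encoding values and the sequences below are made-up illustrative stand-ins, not the published parameters of the cited study:

import numpy as np

# Sketch of sequence spectral analysis: encode each letter as a number,
# Fourier-transform the signal, and compare power spectra across sequences.
# The encoding table and sequences are illustrative stand-ins only.
encoding = {'A': 0.37, 'C': 0.83, 'G': 0.01, 'T': 0.13}

def spectrum(seq):
    signal = np.array([encoding[ch] for ch in seq], dtype=float)
    signal -= signal.mean()                  # remove the constant (DC) component
    return np.abs(np.fft.rfft(signal)) ** 2  # power spectrum of the sequence

s1 = spectrum("ACGTACGTACGTACGT")
s2 = spectrum("ACGAACGAACGAACGA")
# Sequences sharing a periodic 'motif' share a dominant peak in their spectra,
# which is the sort of common-frequency signature the excerpt reports.
print("dominant peak bins:", np.argmax(s1[1:]) + 1, np.argmax(s2[1:]) + 1)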
"Miracles do not happen in contradiction to nature, but only in contradiction to that which is known to us of nature."
St. Augustine
Thus, when one allows God into math, as Gödel indicated must ultimately be done to keep math from being 'incomplete', then there actually exists a very credible, empirically backed reconciliation between Quantum Mechanics and General Relativity into a 'Theory of Everything'! Yet it certainly is one that many dogmatic Atheists will try to deny the relevance of.,,, As a footnote: Gödel, who proved you cannot have a mathematical 'Theory of Everything' without allowing God to bring completeness to it, also had this to say:
The God of the Mathematicians – Goldman
Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” – Kurt Gödel – (Gödel is considered by many to be the greatest mathematician of the 20th century)
Philippians 2: 5-11
While I agree with a criticism, from a Christian, that was leveled against the preceding Shroud of Turin video, namely that God needed no help from the universe in the resurrection event of Christ, I am nonetheless very happy to see that what is considered the number one problem in physics today, the unification into a 'theory of everything' of what is in essence the finite materialistic world of General Relativity and the infinite Theistic world of Quantum Mechanics, does in fact seem to find a credible resolution within the resurrection event of Jesus Christ Himself. It seems almost overwhelmingly apparent to me, from the 'scientific evidence' we now have, that Christ literally ripped a hole in the finite entropic space-time of this universe to reunite infinite God with finite man. That modern science would even offer such an almost tangible glimpse into the mechanics of what happened in the tomb of Christ should be a source of great wonder and comfort for the Christian heart.
Psalms 16:10
Acts 2:31
A shortened form of the evidence is here:
It is also interesting to note that 'higher dimensional' mathematics had to be developed before Einstein could formulate General Relativity, and indeed before Quantum Mechanics could be elucidated;
The Mathematics Of Higher Dimensionality – Gauss & Riemann – video
3D to 4D shift - Carl Sagan - video with notes
I think it should be fairly clear by now that, much contrary to the 'mediocrity' of the earth, and of humans, supposedly established by the heliocentric discoveries of Galileo and Copernicus, the findings of modern science are very comforting to Theistic postulations in general, and even lend strong plausibility to the main tenet of Christianity, which holds that Jesus Christ is the only begotten Son of God.
Matthew 28:18
And Jesus came up and spoke to them, saying, "All authority has been given to Me in heaven and upon earth."
Of related note; there is a mysterious 'higher dimensional' component to life:
Excerpt: Many fundamental characteristics of organisms scale with body size as power laws of the form: Y = Y0 M^b,,,
4-Dimensional Quarter Power Scaling In Biology - video
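To make the form of that power law concrete, here is a minimal sketch of how such a scaling exponent b is typically estimated: a power law is a straight line on log-log axes, so the slope of a log-log fit recovers b. The data below are synthetic, generated with the Kleiber-like value b = 3/4 purely for illustration, not real measurements:

import numpy as np

# Synthetic allometric data: Y = Y0 * M^b with b = 3/4, plus lognormal
# scatter. All values are illustrative, not real biological measurements.
rng = np.random.default_rng(0)
M = np.logspace(-2, 6, 60)                      # body masses (arbitrary units)
Y = 3.0 * M**0.75 * rng.lognormal(0.0, 0.1, M.size)

# A power law Y = Y0 * M^b is linear in log-log space:
# log Y = log Y0 + b * log M, so the fitted slope estimates b.
b_hat, logY0_hat = np.polyfit(np.log(M), np.log(Y), 1)
print(f"estimated exponent b ~ {b_hat:.3f} (generated with b = 0.75)")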
Of related note are the 'invariant' patterns found in life, patterns whose origination natural selection cannot explain:
Chargaff’s “Grammar of Biology”: New Fractal-like Rules - 2011
Excerpt from Conclusion: It was shown that these rules are valid for a large set of organisms: bacteria, plants, insects, fish and mammals. It is noteworthy that no matter the word length the same pattern is observed (self-similarity). To the best of our knowledge, this is the first invariant genomic properties publish(ed) so far, and in Science invariant properties are invaluable ones and usually they have practical implications.
Though Jerry Fodor and Massimo Piattelli-Palmarini rightly find it inexplicable that 'random' Natural Selection could be the rational explanation for the invariant scaling of the physiology and anatomy of living things to four-dimensional parameters, they do not seem to fully realize the implications this 'four dimensional scaling' of living things presents. This 4-D scaling is something we should rightly expect from an Intelligent Design perspective. This is because Intelligent Design holds that 'higher dimensional transcendent information' is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional 'expectation' for life, from an Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an 'emergent' property of the 3-D material realm.
Earth’s crammed with heaven,
And every common bush afire with God;
But only he who sees, takes off his shoes,
The rest sit round it and pluck blackberries.
- Elizabeth Barrett Browning
Quantum entanglement holds together life’s blueprint - 2010
Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
The relevance of continuous variable entanglement in DNA - July 2010
Excerpt: We consider a chain of harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. The binding energies between entangled and classically correlated states are compared. We apply our model to DNA. By comparing our model with numerical simulations we conclude that entanglement may play a crucial role in explaining the stability of the DNA double helix.
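The cited paper's actual model is not reproduced here, but a generic sketch of the kind of coupled harmonic-oscillator chain its abstract describes can illustrate the idea; the chain length, bare frequency, and coupling strength below are made-up illustrative values:

import numpy as np

# Generic chain of N identical harmonic oscillators with nearest-neighbour
# coupling, the kind of model the excerpt above describes. Parameter values
# are illustrative only, not those of the cited DNA paper.
N = 10          # number of oscillators (e.g., base pairs)
omega0 = 1.0    # bare oscillator frequency (arbitrary units)
g = 0.2         # nearest-neighbour coupling strength (arbitrary units)

# Dynamical matrix: omega0^2 on the diagonal, -g on the off-diagonals.
K = omega0**2 * np.eye(N) - g * (np.eye(N, k=1) + np.eye(N, k=-1))

# Its eigenvalues are the squared frequencies of the collective (delocalised)
# normal modes that replace the independent-oscillator picture.
mode_freqs = np.sqrt(np.linalg.eigvalsh(K))
print("collective mode frequencies:", np.round(mode_freqs, 3))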
Quantum Information/Entanglement In DNA & Protein Folding - short video
Quantum Computing in DNA – Stuart Hameroff
Excerpt: Hypothesis: DNA utilizes quantum information and quantum computation for various functions. Superpositions of dipole states of base pairs consisting of purine (A,G) and pyrimidine (C,T) ring structures play the role of qubits, and quantum communication (coherence, entanglement, non-locality) occur in the “pi stack” region of the DNA molecule.,,, We can then consider DNA as a chain of qubits (with helical twist).
Output of quantum computation would be manifest as the net electron interference pattern in the quantum state of the pi stack, regulating gene expression and other functions locally and nonlocally by radiation or entanglement.
Quantum Action confirmed in DNA by direct empirical research;
DNA Can Discern Between Two Quantum States, Research Shows - June 2011
Does DNA Have Telepathic Properties?-A Galaxy Insight - 2009
Excerpt: The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
Can Quantum Mechanics Play a Role in DNA Damage Detection? (Short answer; YES!) – video - see also, at about the 27 minute mark of the video, Fröhlich Condensation and Quantum Consciousness
It turns out that quantum information has been confirmed to be in protein structures as well;
Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73
Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.
Quantum states in proteins and protein assemblies:
Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign.
Myosin Coherence
Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects.
Here's another measure for quantum information in protein structures:
Proteins with cruise control provide new perspective:
The preceding is solid confirmation that far more complex information resides in proteins than meets the eye, for the calculus equations used for 'cruise control', which must somehow reside within the quantum information that is 'constraining' the entire protein structure to its 'normal' state, are anything but 'simple classical information'. For a sample of the equations that must be dealt with to 'engineer' even a simple process control loop, like cruise control, along an entire protein structure, please see the following site:
PID controller
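For a feel of what even the simplest such control loop involves, here is a minimal discrete-time PID sketch driving a toy cruise-control 'plant'; the gains and the one-line vehicle model are illustrative assumptions of mine, not anything taken from the linked article:

# Minimal discrete-time PID controller driving a toy cruise-control plant.
# Gains and the one-line vehicle model are illustrative assumptions only.
kp, ki, kd = 0.8, 0.3, 0.05       # proportional, integral, derivative gains
dt = 0.1                          # timestep in seconds
setpoint = 25.0                   # target speed in m/s

speed, integral, prev_error = 0.0, 0.0, 0.0
for step in range(200):           # simulate 20 seconds
    error = setpoint - speed
    integral += error * dt
    derivative = (error - prev_error) / dt
    throttle = kp * error + ki * integral + kd * derivative
    prev_error = error
    # Toy plant: throttle accelerates the car, drag slows it down.
    speed += (throttle - 0.1 * speed) * dt

print(f"speed after 20 s: {speed:.2f} m/s (target {setpoint} m/s)")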
It is very interesting to note that quantum entanglement, which conclusively demonstrates that 'information' in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place! Yet it is also very interesting to note, given Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space; i.e. Theism has always postulated a 'living soul' for man that lives past the death of the body.
Genesis 2:7
Falsification Of Neo-Darwinism by Quantum Entanglement/Information
A few comments on ‘non-local’ epigenetic information implicated in 3-D spatial organization of Body Plans:
Here are several more scriptures on man's 'eternal soul' at this following site:
Bible Reference Notes: “Hell and the Eternal Soul.”
Further notes:
The Unbearable Wholeness of Beings - Steve Talbott
The ‘Fourth Dimension’ Of Living Systems
Quantum no-hiding theorem experimentally confirmed for first time - March 2011
Quantum no-deleting theorem
Does the fact that quantum information, which can neither be created nor destroyed, is found in molecular biology at such a foundational level, and on such a massive scale, provide conclusive proof for the 'living soul' of man??? Well, all by itself, maybe not 'conclusive proof' in the strictest sense of the notion, but it certainly makes the question 'Does man have a living soul?' a whole lot more integrated with how the foundation of reality itself is found to be structured! i.e. It makes it far more credible scientifically!
Of related interest, the following article draws attention to the fact that humans 'just so happen' to be near the logarithmic center of the universe, between the Planck length and the cosmic horizon of the cosmic background radiation (10^-33 cm and 10^28 cm respectively).
The View from the Centre of the Universe by Nancy Ellen Abrams and Joel R. Primack
Excerpt: The size of a human being is near the centre of all possible sizes.
Scale of the Universe (From Planck length to the Cosmic Background Radiation) - interactive scale
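Taking the quoted endpoints at face value, the logarithmic midpoint of that range works out to sqrt(10^-33 cm x 10^28 cm) = 10^-2.5 cm; a human, at roughly 10^2 cm, thus sits within about five decades of that midpoint on the full 61-decade scale.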
As to the fact that, as far as the solar system itself is concerned, the earth is not 'central', I find it a 'poetic reflection' of our true spiritual condition that this seemingly insignificant earth revolves around the much more massive sun. In regards to God's 'kingdom of light', are we not to keep in mind that our lives are to be guided by the much higher purpose tied to our future in God's kingdom of light? Are we not to avoid placing too much emphasis on what this world has to offer, since it is so much more insignificant than what heaven has to offer?
Louie Giglio - How Great Is Our God - Part 2 - video
You could fit 262 trillion earths inside (the star of) Betelgeuse. If the Earth were a golfball that would be enough to fill up the Superdome (football stadium) with golfballs,,, 3000 times!!! When I heard that as a teenager that stumped me right there because most of my praying had been advising God, correcting God, suggesting things to God, drawing diagrams for God, reviewing things with God, counseling God. - Louie Giglio
C.S. Lewis
Sara Groves - You Are The Sun - Music video
Psalm 8: 3-4
Journey Through the Universe - George Smoot- Frank Turek - video
The Catholic Church put Galileo on trial for teaching that the planets revolve around the sun. They found him guilty of heresy, forced him to publicly recant what he had written, and then placed him under house arrest. The religious leaders are said to have done this because Galileo's supposed 'heresy' was thought to have upset the basic biblical belief of man being made in God's image. Though the actual story of how science and religion became separated is a lot subtler than what is currently believed from the Galileo affair, (Why Galileo was Wrong, Even Though He was Right), this particular episode between the church and Galileo is now generally looked at as the start of what most people presume is a great divide between science and religion, a divide which has lasted for several centuries. Yet despite this common perception, within the last century there has been a veritable avalanche of discovery, from many diverse fields of science, which has greatly narrowed this perceived 'great divide' between science and religion.
The Return of the God Hypothesis - Stephen Meyer
video lecture:
Multiple Competing Worldviews - Stephen Meyer on John Ankerberg - video - November 4, 2011
Richard Dawkins Lies About William Lane Craig AND Logic! - video and article defending each argument
Evidence For the Existence of God - William Lane Craig - video lecture defending each argument
The narrowing of this great divide started with astronomer Edwin Hubble's (1889-1953) discovery, in 1929, of galaxies speeding away from each other. This, as well as many other discoveries confirming the Big Bang, has firmly established that the universe actually had a beginning, just as theologians have always claimed.
Beyond The Big Bang: William Lane Craig Templeton Foundation Lecture (HQ) 1/6 - video
William Lane Craig vs Peter Millican: "Does God Exist?", Birmingham University, October 2011 - video
The Scientific Evidence For The Big Bang - Michael Strauss PhD. - video
Evidence Supporting the Big Bang
Quantum Evidence for a Theistic Big Bang
Robert Wilson – Nobel laureate – co-discover Cosmic Background Radiation
George Smoot – Nobel laureate in 2006 for his work on COBE
“,,,the astronomical evidence leads to a biblical view of the origin of the world,,, the essential element in the astronomical and biblical accounts of Genesis is the same.”
Robert Jastrow – Founder of NASA’s Goddard Institute – Pg.15 ‘God and the Astronomers’
,,, 'And if you're curious about how Genesis 1, in particular, fares. Hey, we look at the Days in Genesis as being long time periods, which is what they must be if you read the Bible consistently, and the Bible scores 4 for 4 on Initial Conditions and 10 for 10 on the Creation Events'
Hugh Ross - Evidence For Intelligent Design Is Everywhere; video
Prof. Henry F. Schaefer cites several interesting quotes, from leading scientists in the field of Big Bang cosmology, about the Theological implications of the Big Bang in the following video:
The Big Bang and the God of the Bible - Henry Schaefer PhD. - video
Entire video:
"The Big Bang represents an immensely powerful, yet carefully planned and controlled release of matter, energy, space and time. All this is accomplished within the strict confines of very carefully fine-tuned physical constants and laws. The power and care this explosion reveals exceeds human mental capacity by multiple orders of magnitude."
Prof. Henry F. Schaefer - closing statement of part 5 of preceding video
Dr. William Lane Craig defends the Kalam Cosmological argument for the existence of God against various attempted refutations - video playlist
Hugh Ross PhD. - Evidence For The Transcendent Origin Of The Universe - video
What Atheists Just Don't Get (About God) - Video
What Contemporary Physics and Philosophy Tell Us About Nature and God - Fr. Spitzer & Dr. Bruce Gordon (Dr. Gordon speaks for the last 25 minutes) - video
Are Many Worlds and the Multiverse the Same Idea? - Sean Carroll
Excerpt: When cosmologists talk about “the multiverse,” it’s a slightly poetic term. We really just mean different regions of spacetime, far away so that we can’t observe them, but nevertheless still part of what one might reasonably want to call “the universe.” In inflationary cosmology, however, these different regions can be relatively self-contained — “pocket universes,” as Alan Guth calls them.
Here is a video of Alan Guth,,,
Did Our Universe have a Beginning? (Alan Guth) - video
,,,Where towards the very end of the video, after considering some fairly exotic materialistic scenarios of 'eternal inflation' of 'pocket universes', Alan Guth concedes that "The ultimate theory for the origin of the universe is still very much up for grabs".
Alexander Vilenkin is far more direct than Alan Guth:
"The conclusion is that past-eternal inflation is impossible without a beginning."
Alexander Vilenkin - from pg. 35 'New Proofs for the Existence of God' by Robert J. Spitzer (of note: An elegant thought experiment, of a space traveler traveling to another galaxy, which Borde, Guth, and Vilenkin used to illustrate the validity of the proof, is on pg. 35 of the book as well.)
Cosmologist Alexander Vilenkin of Tufts University in Boston
How Atheists Take Alexander Vilenkin (& the BVG Theorem) Out Of Context - William Lane Craig - video
Genesis 1:1-3
This following video gives a very small glimpse at the power involved when God said 'Let there be light':
God's Creative and Sustaining Word - Dr. Don Johnson - video
The following video and article are very suggestive as to providing almost tangible proof for God 'speaking' reality into existence:
The Deep Connection Between Sound & Reality - Evan Grant - Allosphere - video
Music of the sun recorded by scientists - June 2010
Excerpt: The sun has been the inspiration for hundreds of songs, but now scientists have discovered that the star at the centre of our solar system produces its own music.
The following video is cool:
What pi sounds like (when put to music) - cool video
It is also very interesting to note that among all the 'holy' books, of all the major religions in the world, only the Holy Bible was correct in its claim for a transcendent origin of the universe. Some later 'holy' books, such as the Mormon text "Pearl of Great Price" and the Qur'an, copy the concept of a transcendent origin from the Bible but also include teachings that are inconsistent with that now established fact. (Ross; Why The Universe Is The Way It Is; Pg. 228; Chpt.9; note 5)
The Most Important Verse in the Bible - Prager University - video
The Uniqueness of Genesis 1:1 - William Lane Craig - video
This discovery of a beginning for the universe has crushed the materialistic belief that the universe has always existed, with no beginning.
Christianity and The Birth of Science - Michael Bumbulis, Ph.D
Christianity Gave Birth To Each Scientific Discipline - Dr. Henry Fritz Schaefer - video
In this short video, Dr. Stephen Meyer notes that the early scientists were Christians whose faith motivated them to learn more about their Creator…
Dr. Meyer on the Christian History of Science - video
A Short List Of The Christian Founders Of Modern Science
Founders of Modern Science Who Believe in GOD - Tihomir Dimitrov
The following is a good essay, by Robert C. Koons, showing that the popular misconception of a war between science and religion, which neo-Darwinists often use in public to defend their (ironically) pseudo-scientific position, is in fact a gross misrepresentation of the facts. For not only does Robert Koons find Theism, particularly Christian Theism, absolutely vital to the founding of modern science, he also argues that the Theistic worldview is necessary for the long term continued success of science into the future:
Science and Theism: Concord, not Conflict* – Robert C. Koons
IV. The Dependency of Science Upon Theism (Page 21)
Excerpt: Far from undermining the credibility of theism, the remarkable success of science in modern times is a remarkable confirmation of the truth of theism. It was from the perspective of Judeo-Christian theism—and from the perspective alone—that it was predictable that science would have succeeded as it has. Without the faith in the rational intelligibility of the world and the divine vocation of human beings to master it, modern science would never have been possible, and, even today, the continued rationality of the enterprise of science depends on convictions that can be reasonably grounded only in theistic metaphysics.
The Origin of Science
Jerry Coyne on the Scientific Method and Religion - Michael Egnor - June 2011
Excerpt: The scientific method -- the empirical systematic theory-based study of nature -- has nothing to so with some religious inspirations -- Animism, Paganism, Buddhism, Hinduism, Shintoism, Islam, and, well, atheism. The scientific method has everything to do with Christian (and Jewish) inspiration. Judeo-Christian culture is the only culture that has given rise to organized theoretical science. Many cultures (e.g. China) have produced excellent technology and engineering, but only Christian culture has given rise to a conceptual understanding of nature.
Christianity Is a Science-Starter, Not a Science-Stopper By Nancy Pearcey
The 'Person Of Christ' was, and is, necessary for science to start and persist!
Bruce Charlton's Miscellany - October 2011
In spite of the fact that modern science can be forcefully argued to owe its very existence to Christianity, many scientists before Hubble's discovery had been swayed by the materialistic philosophy and had thus falsely presumed the universe itself to be infinite in size as well as eternal in duration. This 'simplistic' conclusion of theirs seems to stem from the fact that it is self-evident that something cannot come from nothing, and they simply could not envision the logical necessity of an eternal transcendent Being who created this material realm. The materialistic philosophy was slightly supported by the first law of thermodynamics, which states that energy can neither be created nor destroyed by any material means. This belief in a universe with no beginning held the upper hand in scientific circles even though the very next law, the second law of thermodynamics ('entropy', the law of universal decay into equilibrium), had raised serious doubts about its validity. As well, in mathematics, in overlapping congruence with entropy, the impossibility of a temporal infinite regression of causes demanded a beginning for the universe; i.e. the existence of a material reality within time called for an 'Alpha', an 'Uncaused Cause' for the material universe, that transcends the material universe.
William Lane Craig - Hilbert's Hotel - The Absurdity Of An Infinite Regress Of 'Things' - video
Time Cannot Be Infinite Into The Past - video
If there's a beginning, must there be a cause for that beginning? - Stephen Meyer - video
Does God Exist? - Argument From The Origin Of Nature - Kirk Durston - video
entire video:
The First Cause Must Be Different From All Other Causes - T.G. Peeler
Einstein's general relativity equation has now been extended to confirm that not only did matter and energy have a beginning in the Big Bang, but that space-time also had a beginning; i.e. the Big Bang was an absolute origin of space-time and matter-energy, and as such demands a cause which transcends space-time and matter-energy.
(Hawking, Penrose, Ellis) - 1970
In conjunction with the mathematical, and logical, necessity of an 'Uncaused Cause' to explain the beginning of the universe, in philosophy it has been shown that,,,
"The 'First Mover' is necessary for change occurring at each moment."
Michael Egnor - Aquinas’ First Way
I find this centuries-old philosophical argument, for the necessity of a 'First Mover' to account for change occurring at each moment, to be validated by quantum mechanics. One line of evidence arises from the smallest indivisible unit of time, Planck time:
Planck time
Excerpt: One Planck time is the time it would take a photon travelling at the speed of light to cross a distance equal to one Planck length. Theoretically, this is the smallest time measurement that will ever be possible,[3] roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change. As of May 2010, the smallest time interval that was directly measured was on the order of 12 attoseconds (12 × 10^−18 seconds),[4] about 10^24 times larger than the Planck time.
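The quoted value can be recovered from the defining formula,

t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \text{s},

i.e. roughly 10^-43 seconds. As a side note, dividing the directly measured 12 attoseconds (1.2 x 10^-17 s) by this t_P gives about 2 x 10^26, so the '10^24' in the excerpt appears, by this arithmetic, to understate the gap by roughly two orders of magnitude.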
The 'first mover' is further warranted to be necessary from quantum mechanics, since the possibility of the universe being considered a self-sustaining 'closed loop' of cause and effect is removed with the refutation, in quantum entanglement experiments, of the 'hidden variable' argument first postulated by Einstein. As well, there must also be a sufficient transcendent cause (God/First Mover) to explain quantum wave collapse for 'each moment' of the universe.
God is the ultimate existence which grounds all of reality
It is also interesting to note that materialists, instead of honestly dealing with the obvious theistic implications of quantum mechanics, will many times invoke Everett's Many Worlds interpretation (often discussed alongside decoherence) when dealing with quantum mechanics. Yet this 'solution' ends up creating profound absurdities of logic rather than providing any rational solution:
Quantum mechanics
Excerpt: The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[39] This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet:
Perhaps some may say that Everett's Many Worlds interpretation of infinite parallel universes is not so absurd after all. If so, then in some other parallel universe in which you also live, Elvis just so happens to be president of the United States, and you just so happen to come to the opposite conclusion, that Many Worlds is in fact absurd! For me, that type of 'flexible thinking' stemming from Many Worlds is completely absurd!!! Moreover, that one example from Many Worlds, of Elvis being President, is just small potatoes compared to the levels of absurdity we would actually be witnessing if Many Worlds were the truth for how reality is constructed.
As an interesting sidelight to this, Einstein hated the loss of determinism that quantum mechanics brought to physics, as illustrated by his infamous 'God does not play dice' quote. Yet on a deeper philosophical level, I've heard one physics professor say something to the effect that the lack of strict determinism in quantum wave collapse actually restores the free will of man to its rightful place; or, probably more correctly, he said something more like, 'The proof of free will is found in the indeterminacy of the quantum wave collapse'. I find this statement to be especially true now that conscious observation is shown to be primary in quantum wave collapse to a quasi-3D particle state, for how could our thoughts truly be free if they were merely the result of particle fluctuations in our brain, whether random or deterministic? Moreover, as is quite obvious to most people, free will is taken as seriously true by all societies, or else why should we spank our children, or punish anybody in jails, if they truly had no free will to control their actions? Indeed, what right would God have to judge anyone if they truly had no free will?
Though I feel very, very comfortable with how the evidence fits the Theistic model of Quantum Mechanics, in which God is the cause of wave function/packet collapse for each unique observer in the universe, the following article deconstructs many, if not all, of the 'alternative' Quantum Mechanics models:
The Metaphysics of Quantum Mechanics - James Daniel Sinclair - October 2010
Abstract: Is the science of Quantum Mechanics the greatest threat to Christianity? Some years ago the journal Christianity Today suggested precisely that. It is true that QM is a daunting subject. This barrier is largely responsible for the fear. But when the veil is torn away, the study of QM builds a remarkably robust Christian apologetic. When pragmatic & logically invalid interpretations are removed, there remain four possibilities for the nature of reality (based on the work of philosopher Henry Stapp). Additional analysis shows two are exclusive to theism. The third can be formulated with or without God. The last is consistent only with atheism. By considering additional criteria, options that deny God can be shown to be false.
This following video is very good, and easy to understand, for pointing out some of the unanswerable dilemmas that quantum mechanics presents to the atheistic philosophy of materialism as materialism is popularly understood:
Dr. Quantum - Double Slit Experiment & Entanglement - video
(Double Slit) A Delayed Choice Quantum Eraser - updated 2007
Astrophysicist John Gribbin comments on the Renninger experiment here:
Solving the quantum mysteries - John Gribbin
Excerpt: From a 50:50 probability of the flash occurring either on the hemisphere or on the outer sphere, the quantum wave function has collapsed into a 100 per cent certainty that the flash will occur on the outer sphere. But this has happened without the observer actually "observing" anything at all! It is purely a result of a change in the observer's knowledge about what is going on in the experiment.
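In Bayesian terms the excerpt's arithmetic is straightforward: with prior probabilities of 0.5 for each surface, observing no flash on the inner hemisphere (an event with probability zero if the photon were headed there) updates the probability of the outer sphere to 0.5 / (0.5 x 1 + 0.5 x 0) = 1, i.e. 100 per cent, without any physical interaction with the photon.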
i.e. The detector is completely removed as the primary cause of quantum wave collapse in the experiment. As Richard Conn Henry clearly implied previously, in the experiment it is found that the 'physical environment' IS NOT sufficient within itself to 'create reality', i.e. IS NOT sufficient to explain quantum wave collapse to an 'uncertain' 3D particle.
Why, who makes much of a miracle? As to me, I know of nothing else but miracles, Whether I walk the streets of Manhattan, Or dart my sight over the roofs of houses toward the sky,,,
Walt Whitman - Miracles
That the mind of an individual observer would play such an integral, yet not completely 'closed loop', role in instantaneous quantum wave collapse to uncertain 3-D particles gives us clear evidence that our mind is a unique entity: a unique entity with a superior quality of existence when compared to the uncertain 3D particles of the material universe. This is clear evidence for the existence of the 'higher dimensional mind' of man that supersedes any material basis the mind has been purported, by materialists, to emerge from. I would also like to point out that the 'effect' of universal quantum wave collapse to each 'central 3D observer' in the universe (Wheeler: Delayed Choice; Wigner: Quantum Symmetries) gives us clear evidence of the extremely special importance that the 'cause' of the 'Infinite Mind of God' places on each of our own individual souls/minds.
Psalm 139:17-18
These following studies and videos confirm this 'superior quality' of existence for our souls/minds:
Darwinian Evolution Vs. Consciousness (Soul) - video (notes in description of video)
Removing Half of Brain Improves Young Epileptics' Lives:
'Surprisingly', at the molecular level, the cells of the brain are found to be extremely 'plastic' to changes in 'activity in the brain', which is, of course, completely contrary to the reductive materialistic view of the mind 'emerging' from the material brain;
DNA Dynamism - PaV - October 2011
Further notes on the transcendence of 'mind':
Famous Cardiac Surgeon’s Stories of Near Death Experiences in Surgery
The Scientific Evidence for Near Death Experiences - Dr Jeffery Long - Melvin Morse M.D. - video
Of interest to Near Death Experiences is the fact that many Experiencers say that, when they look at their body while having a Near Death Experience, they find that their body is made of light. Well, interestingly, it is found that humans do emit 'ultra-weak' light;
Cellular Communication through Light
Are humans really beings of light?
Vicky Noratuk
Quantum Consciousness - Time Flies Backwards? - Stuart Hameroff MD
Particular quote of note from preceding video;
James J. Hurtak, Ph.D.
Brain ‘entanglement’ could explain memories - January 2010
In part three of this following video is a very interesting finding that indicates that 'transcendent' quantum coherence within the brain is far less active upon a person entering a sleeping state:
Through The Wormhole S2 E1 (1/6)
The preceding 'quantum evidence' provides a foundation for a plausible 'transcendent mechanism' for the following study:
Bridging the Gap - October 2011
Excerpt: Like a bridge that spans a river to connect two major metropolises, the corpus callosum is the main conduit for information flowing between the left and right hemispheres of our brains. Now, neuroscientists at the California Institute of Technology (Caltech) have found that people who are born without that link—a condition called agenesis of the corpus callosum, or AgCC—still show remarkably normal communication across the gap between the two halves of their brains.
This following study adds weight to the 'transcendence of mind';
Study suggests precognition may be possible - November 2010
"Thought precedes action as lightning precedes thunder."
Heinrich Heine - in the year 1834
And though it is not possible to localize memories (information) inside the brain, it is interesting to note how extremely complex the brain is in its ability to manipulate rudimentary information:
Boggle Your Brain - November 2010
Excerpt: One synapse, by itself, is more like a microprocessor--with both memory-storage and information-processing elements--than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.
This following experiment is really interesting:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video
The Mind Is Not The Brain - Scientific Evidence - Rupert Sheldrake - (Referenced Notes)
Here a Darwinian Psychologist has a moment of honesty facing the 'hard problem' that consciousness presents to materialism;
David Barash - Materialist/Atheist Darwinian Psychologist
Angus Menuge Interviewed by Apologetics 315 - audio interview
Description: Today's interview is with Dr. Angus Menuge, Professor of Philosophy at Concordia University, and author of Agents Under Fire: Materialism and the Rationality of Science. He talks about his background and work, the philosophy of mind, what reason (or reasoning) is, what materialism is as a worldview, things excluded from a materialistic worldview, methodological naturalism and materialism, accounting for free will, materialistic accounts of reason, the epistemological argument from reason, the ontological argument from reason, finding the best explanation for reason, problems with methodological naturalism, implications of materialism, practical application of the argument from reason, advice for apologists, the International Academy of Apologetics, and more.
Genesis 2:7
I find it very interesting that the materialistic belief in a universe stable and infinite in duration was so deeply rooted in scientific thought that when Albert Einstein (1879-1955) was shown that his general relativity equations indicated a universe that was unstable and would 'draw together' under its own gravity, he added a cosmological constant to his equations to reflect a stable universe rather than entertain the thought that the universe had a beginning.
Einstein and The Belgian Priest, George Lemaitre - The "Father" Of The Big Bang Theory - video
The Universe Had a Definite Beginning - Einstein and Edwin Hubble - Stephen Meyer on the John Ankerberg show - video
Of note: this was not the last time Einstein's base materialistic philosophy severely misled him. He was also misled in the Bohr-Einstein debates, in which he was repeatedly proven wrong in challenging the 'spooky action at a distance' postulations of the emerging field of quantum mechanics. The following video, which I listed earlier, bears repeating, since it highlights the Bohr/Einstein debate and the decades-long struggle to 'scientifically' resolve the disagreement between them:
The Failure Of Local Realism or Reductive Materialism - Alain Aspect - video
The following is an interesting exchange between Bohr and Einstein:
God does not play dice with the cosmos.
Albert Einstein
In response Niels Bohr said,
Do not presume to tell God what to do.
Though many words could be written on the deep underlying philosophical issues of that exchange between Bohr and Einstein, my take on the whole matter is summed up nicely, and simply, in the following verse and video:
Proverbs 16:33
Chance vs. God: The Battle of First Causes – John MacArthur - 10 minute audio
When astronomer Edwin Hubble published empirical evidence indicating a beginning for the universe, Einstein ended up calling the cosmological constant he had added to his equations the biggest blunder of his life. But then again, mathematically speaking, Einstein's 'fudge factor' was not so much of a blunder after all. In the 1990's a highly modified cosmological constant, representing the elusive 'Dark Energy', was reintroduced into general relativity equations to account for the accelerated expansion of the universe and to explain the discrepancy between the ages of the oldest stars in the Milky Way galaxy and the age of the universe. Far from providing a materialistic solution which would have enabled the universe to be stable and infinite as Einstein had originally envisioned, the cosmological constant, finely tuned to 1 part in 10^120, has turned into one of the most powerful evidences of design among the many finely-tuned universal constants of the universe. These universal, and transcendent, constants seem to have no other apparent reason for being at precise values than to enable carbon-based life to be possible in this universe. They dramatically demonstrate the need for an infinitely powerful transcendent Creator to account for the fact that the universe is apparently the stunning work of a master craftsman who had carbon-based life in mind as His final goal. If the avalanche of incoming scientific evidence keeps going in the same direction as it has been going for the last century, and there is no hint the evidence will change directions, human beings, warts and all, could once again be popularly viewed as God's ultimate purpose for creating this universe. Man, and the earth beneath his feet, could very well be looked at as the 'center of the universe' by both scientists and theologians.
Genesis 1:26-27
Then God said, "Let Us make man in Our image, according to Our likeness; let them have dominion ..."
God Of Wonders - City On A Hill - music video
A straightforward interpretation of the anthropic hypothesis is simple in its proposition. It proposes that the entire universe, in all its grandeur, was purposely created by an infinitely powerful transcendent Creator specifically with human beings in mind as the end result. Therefore a strict interpretation of the anthropic hypothesis would propose that each level of the universe's development toward man may reflect the handiwork of such a Creator. Here are some resources reflecting that approach:
"Creation as Science" - Hugh Ross - A Testable Creation Model - video
Is There Scientific Evidence for the Existence of God? - Walter Bradley
Creation of the Cosmos - Walter Bradley - video
God Is Not Dead Yet - William Lane Craig - The Revival of Theism In Philosophy since the 1960's
William Lane Craig lecture on Richard Dawkins book 'The God Delusion' - video
The investigative tool for the hypothesis is this: all the universe's 'links of chain' leading to the appearance of man may be deduced as 'intelligently designed' with what is termed 'irreducible complexity'. The term 'irreducible complexity' was coined in molecular biology by biochemist Michael Behe, PhD (b. 1952), in his book 'Darwin's Black Box'. Irreducible complexity is best understood by comparison. It is similar to saying that each major part of a finely made Swiss watch is necessary for the watch to operate: take away any part and the watch will fail to operate. Though individual parts of the watch, or even the watch itself, may have some other purpose in some other system, the principle of integration for a specific singular purpose is a very anti-Darwinian concept that steadfastly resists materialistic explanation. In molecular biology the best-known example of irreducible complexity, and thus of Intelligent Design, has become the bacterial flagellum.
Bacterial Flagellum - A Sheer Wonder Of Intelligent Design - video
Bacterial Flagellum: Visualizing the Complete Machine In Situ
Electron Microscope Photograph of Flagellum Hook-Basal Body
Engineering at Its Finest: Bacterial Chemotaxis and Signal Transduction - JonathanM - September 2011
Excerpt: The bacterial flagellum represents not just a problem of irreducible complexity. Rather, the problem extends far deeper than that. What we are now observing is the existence of irreducibly complex systems within irreducibly complex systems. How random mutations, coupled with natural selection, could have assembled such a finely set-up system is a question to which I defy any Darwinist to give a sensible answer.
Biologist Howard Berg at Harvard calls the bacterial flagellum
"the most efficient machine in the universe."
The flagellum has steadfastly resisted all attempts to elucidate its plausible origination by Darwinian processes, much less has anyone ever actually evolved a flagellum from scratch in the laboratory:
Genetic Entropy Refutation of Nick Matzke's TTSS (type III secretion system) to Flagellum Evolutionary Narrative:
Excerpt: Comparative genomic analysis show that flagellar genes have been differentially lost in endosymbiotic bacteria of insects. Only proteins involved in protein export within the flagella assembly pathway (type III secretion system and the basal-body) have been kept...
Excerpt: I am convinced that the T3SS is almost certainly younger than the flagellum. If one aligns the amino acid sequences of the flagellar proteins (that have homologous counterparts in the T3SS), and if one also aligns the amino acid sequences of the T3SS proteins, one finds that the T3SS protein amino acid sequences are much more conserved than the amino acid sequences of the flagellar proteins... - LivingstoneMorford - experimental scientist - UD blogger
Stephen Meyer - T3SS Derived From Bacterial Flagellum (Successful ID Prediction) - video
Phylogenetic Analyses of the Constituents of Type III Protein Secretion Systems
Excerpt: We suggest that the flagellar apparatus was the evolutionary precursor of Type III protein secretion systems.
Peer-Reviewed Paper Investigating Origin of Information Endorses Irreducible Complexity and Intelligent Design - A.C. McIntosh per Casey Luskin - July 2010
Excerpt: many think that that debate has been settled by the work of Pallen and Matzke where an attempt to explain the origin of the bacterial flagellum rotary motor as a development of the Type 3 secretory system has been made. However, this argument is not robust simply because it is evident that there are features of both mechanisms which are clearly not within the genetic framework of the other.
Presenting the Positive Case for Design - Casey Luskin - February 14, 2012
Excerpt: If you think of the flagellum like an outboard motor, and the T3SS like a squirt gun, the parts they share are the ones that allow them to be mounted on the bracket of a boat. But the parts that give them their distinct functions -- propulsion or injection -- are not shared. I said that thinking you can explain the flagellum simply by referring me to the T3SS is like saying if you can account for the origin of the mounting-bracket on the back of your boat, then you've explained the origin of the motor too -- which obviously makes no sense.
"One fact in favour of the flagellum-first view is that bacteria would have needed propulsion before they needed T3SSs, which are used to attack cells that evolved later than bacteria. Also, flagella are found in a more diverse range of bacterial species than T3SSs. ‘The most parsimonious explanation is that the T3SS arose later," Howard Ochman - Biochemist - New Scientist (Feb 16, 2008)
Michael Behe on Falsifying Intelligent Design - video
Genetic analysis of coordinate flagellar and type III - Scott Minnich and Stephen Meyer
Michael Behe Hasn't Been Refuted on the Flagellum - March 2011
Bacterial Flagella: A Paradigm for Design – Scott Minnich – Video
Ken Miller's Inaccurate and Biased Evolution Curriculum - Casey Luskin - 2011
Excerpt: One mutation, one part knock out, it can't swim. Put that single gene back in we restore motility. ... knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We've done that with all 35 components of the flagellum, and we get the same effect. - Scott Minnich
Flagellum - Sean D. Pitman, M.D.
The Bacterial Flagellum – Truly An Engineering Marvel! - December 2010
Hiroyuki Matsuura, Nobuo Noda, Kazuharu Koide, Tetsuya Nemoto and Yasumi Ito
Excerpt from bottom page 7: Note that the physical principle of flagella motor does not belong to classical mechanics, but to quantum mechanics. When we can consider applying quantum physics to flagella motor, we can find out the shift of energetic state and coherent state.
The manner in which bacteria with flagella move is also very interesting;
Structures and Mechanisms of Bacterial Motility - Marty Player
Excerpt: motile bacteria move in a random running and tumbling pattern when in an isotonic solution. While this type of movement may be totally random in some situations, in others motile bacteria bias this random walk.
Towards the end of the following video is an excellent animation of the 'running and tumbling' motion of bacteria with flagella;
Animations from E O Wilson’s Lord of the Ants – video
"A Bit Unprepossessing": Plantinga on the Logic of Dawkins's Blind Watchmaker - Jay W. Richards February 9, 2012
Excerpt: what Dawkins has in mind is something like this: If it's unlikely that a bacterial flagellum could have arisen by chance or the Darwinian mechanism, then any agent that designed the flagellum would be even less likely.
Plantinga finds a fatal problem here. Dawkins defines complexity as the property of something that has parts "arranged in a way that is unlikely to have arisen by chance alone." But God is immaterial and so doesn't have parts in this sense. According to Dawkins's own definition of complexity, therefore, God is not complex. One can make a similar point without invoking God. It doesn't follow that because an agent can produce organized complexity, that the agent is complex. (Frankly, I don't think it makes sense to refer to any agent as "complex.") Organized complexity might very well be a reliable sign of an intelligent agent. So Dawkins's argument against the improbability of God's existence, and, a fortiori, the improbability of intelligent design, fails.
As well, it has now been demonstrated that the specific sequence complexity of a functional protein can be mathematically quantified as functional information bits (Fits).
Functional information and the emergence of bio-complexity:
Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions.
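To make the abstract's definition concrete, here is a minimal sketch in Python of the I(Ex) = -log2[F(Ex)] calculation; the sequence counts below are made-up placeholders for illustration, not data from the paper:

import math

def functional_information(n_functional, n_total):
    # Hazen et al.'s measure: I(Ex) = -log2(F(Ex)), where F(Ex) is the
    # fraction of all possible configurations of the system that achieve
    # a degree of function of at least Ex.
    return -math.log2(n_functional / n_total)

# Hypothetical example: suppose 1,000 of the 4^50 possible 50-base RNA
# sequences bind GTP at the required strength (invented numbers).
print(functional_information(1_000, 4 ** 50))  # ~90 bits (Fits)

Note that the measure is driven entirely by the rarity of the function: halving the number of functional configurations adds exactly one bit.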
Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video
Entire video:
and this paper:
Here is a brief discussion on a plausible way to more precisely measure the complete information content of a cell as well as measuring Landauer's principle in a cell:
It is interesting to note that many evolutionists are very evasive when asked to precisely define functional information. In fact I've seen some die-hard evolutionists deny that information even exists in a cell. Many times evolutionists will try to say information is generated by appealing to Claude Shannon's broad definition of information, since 'non-functional' bits may count as information under that broad definition; yet, when looked at carefully, Shannon information completely fails to explain the generation of functional information.
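That distinction can be made concrete with a few lines of Python. This is a sketch of the standard Shannon measure only, not of any formal 'functional information' metric: a random string scores at least as high as meaningful English on Shannon's measure, which is precisely why that measure, by itself, cannot certify function.

import math
import random
import string
from collections import Counter

def shannon_bits_per_symbol(s):
    # Empirical Shannon entropy in bits per symbol: measures statistical
    # surprise only, and says nothing about whether the string does anything.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

gibberish = "".join(random.choices(string.ascii_lowercase, k=2000))
english = "the quick brown fox jumps over the lazy dog " * 50

print(shannon_bits_per_symbol(gibberish))  # roughly 4.7, near the 26-letter maximum
print(shannon_bits_per_symbol(english))    # roughly 4.3: lower, yet only this string is functional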
Mutations, epigenetics and the question of information
Testable hypotheses about FSC
What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:
Null hypothesis #1
Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.
Null hypothesis #2
Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.
Null hypothesis #3
Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.
Null hypothesis #4
Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.
The following site has a fairly concise definition for functional information (dFSCI; digital functionally specified complex information) by a blogger called gpuccio:
As well it is found that Claude Shannon's work on 'communication of information' actually fully supports Intelligent Design as is illustrated in the following video and article:
Shannon Information - Channel Capacity - Perry Marshall - video
Skeptic's Objection to Information Theory #1:
"DNA is Not a Code"
As well, William Dembski and Robert Marks have shown that the information found in life can be measured. And since the information can be measured it can be used to falsify Darwinian evolution:
"LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information":
William Dembski Is Interviewed By Casey Luskin About Conservation Of Information - Audio
Dr. Dembski has emphasized that the Law of Conservation of Information (LCI) is clearly differentiated from the common definition of Theistic Evolution, since mainstream Theistic evolutionists, such as Ken Miller and Francis Collins, hold that the Design/Information found in life is not separable from the purely material processes of the universe, whereas Dembski and Marks are clearly saying the Design/Information found in life is detectable, can be separated from the material processes we see in the universe, and "can be measured in precise information-theoretic terms". In other words, the Dembski-Marks paper shows that for gradual evolution to actually be true it cannot be random Darwinian evolution, and that an 'Intelligent Designer' would have to somehow provide the additional functional information needed to make gradual evolution of increased functional complexity possible. Thus the theoretical underpinnings of random functional information generation by material processes are now completely removed from Darwinian ideology.
Yet even though God could very well have created life gradually, did God use gradual processes to create life on Earth? I don't think so. There are many solid lines of evidence pointing to the fact that the principle of Genetic Entropy (loss of functional information) is the true principle for all biological adaptations, and that no gradual 'material processes' are involved in the "evolution" of a lifeform to greater heights of functional complexity once God has created a Parent Kind/Species. The following site has a general outline of the evidence that argues forcefully against the gradual model of Theistic evolutionists:
Why Secular and Theistic Darwinists Fear ID - September 2010
The main problem, for the secular model of neo-Darwinian evolution to overcome, is that no one has ever seen purely material processes generate functional 'prescriptive' information.
Can We Falsify Any Of The Following Null Hypothesis (For Information Generation)
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
Is Life Unique? David L. Abel - January 2012
Concluding Statement: The scientific method itself cannot be reduced to mass and energy. Neither can language, translation, coding and decoding, mathematics, logic theory, programming, symbol systems, the integration of circuits, computation, categorizations, results tabulation, the drawing and discussion of conclusions. The prevailing Kuhnian paradigm rut of philosophic physicalism is obstructing scientific progress, biology in particular. There is more to life than chemistry. All known life is cybernetic. Control is choice-contingent and formal, not physicodynamic.
"Nonphysical formalism not only describes, but preceded physicality and the Big Bang
Formalism prescribed, organized and continues to govern physicodynamics."
The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”... After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non-trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
The Law of Physicodynamic Incompleteness - David L. Abel - August 2011
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
Programming of Life - Information - Shannon, Functional & Prescriptive - video
While neo-Darwinian evolution has no evidence that material processes can generate functional prescriptive information, Intelligent Design does have 'proof of principle' that information can 'locally' violate the second law and generate potential energy:
Maxwell's demon demonstration turns information into energy - November 2010
Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a "spiral-staircase-like" potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
How Could God Interact with the World? (William Dembski)
After much reading, research, and debate with evolutionists, I find the principle of Genetic Entropy (loss of functional information) to be the true principle for all 'beneficial' biological adaptations, which directly contradicts unguided neo-Darwinian evolution. As well, unlike Darwinian evolution, which can claim no primary principles in science on which to rest its claim for the generation of functional information, Genetic Entropy can rest its foundation in science directly on the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (LCI; Dembski, Marks) (Null Hypothesis; Abel). In the first phase of Genetic Entropy, which any life-form will go through, all sub-speciation adaptations away from a parent species that increase fitness/survivability in a new environment for the sub-species will always come at a cost of the functional information already present in the parent species' genome. This is, the vast majority of the time, measurable as loss of genetic diversity in genomes. This phase of Genetic Entropy is verified, in one line of evidence, by the fact that all population genetics studies show a consistent loss of genetic diversity from a parent species for all sub-species that have adapted away (Maciej Giertych). This fact is also well testified to by plant and animal breeders, who know there are strict limits to the amount of variability you can expect when breeding for any particular genetic trait. The second line of evidence that this primary phase of the principle of Genetic Entropy is being rigorously obeyed is found in the fact that the 'Fitness Test' against a parent species of bacteria has never been violated by any sub-species of that parent bacteria.
Testing Evolution in the Lab With Biologic Institute's Ann Gauger - podcast with link to peer-reviewed paper
Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger's paper is "Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness".
For a broad outline of the 'Fitness test', required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles:
Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video
This following study demonstrated that bacteria which had gained antibiotic resistance by mutation are less fit than wild-type bacteria:
Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008
Excerpt: Therefore, in order to simulate competition in the wild, bacteria must be grown on minimal media. Minimal media mimics better what bacteria experience in a natural environment over a period of time. This is the place where fitness can be accurately assessed. Given a rich media, they grow about the same.
Also of note: there appears to be an in-built (designed) mechanism, which kicks in during starvation, that allows wild-type bacteria to resist antibiotics more robustly than 'well fed' bacteria;
Starving bacteria fight antibiotics harder? - November 2011
Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution.
A Tale of Two Falsifications of Evolution - September 2011
Antibiotic resistance is ancient - September 2011
Evolution - Tested And Falsified - Don Patton - video
List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
The following study surveys four decades of experimental work, and solidly backs up the preceding conclusion that there has never been an observed violation of genetic entropy:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades... The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent... I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is, a net 'fitness gain' within a 'stressed' environment, i.e. remove the stress from the environment and the parent strain is always more 'fit'.)
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010
Where's the substantiating evidence for neo-Darwinism?
The previously listed 'fitness test', and the paper by Dr. Behe, fairly conclusively demonstrate that 'optimal information' was originally encoded within a parent bacterium by God, and has not been added to by any 'teleological' methods in the beneficial adaptations of the sub-species of bacteria. Thus the inference to Genetic Entropy, i.e. that God has not specifically moved within nature in a teleological manner to gradually increase the functional information of a genome, still holds true for the principle of Genetic Entropy.
It seems readily apparent to me that to conclusively demonstrate that God has moved within nature, in a teleological manner, to provide a sub-species of bacteria with additional functional information over the 'optimal' genome of its parent species, the fitness test must be passed by the sub-species against the parent species. If the fitness test is shown to be passed, then the new molecular function which provides the more robust survivability for the sub-species must have its additional Functional Information Bits (Fits) calculated, and found to be greater than 140 Fits. 140 Fits is what has now been generously set by Kirk Durston as the maximum limit of functional information which can reasonably be expected to be generated by the natural processes of the universe over the entire age of the universe (the actual limit is most likely around 40 Fits). (Of note: I have not seen any evidence to suggest that purely material processes can exceed the much more constrained 'two protein-protein binding site' limit for functional information/complexity generation found by Michael Behe in his book "The Edge of Evolution".) This fitness test, and calculation, must be done to rigorously establish that materialistic processes did not generate the functional information (Fits), and to rigorously establish that teleological processes were indeed involved in the increase of functional complexity of the beneficially adapted sub-species. The second and final phase of Genetic Entropy, outlined by John Sanford in his book Genetic Entropy & the Mystery of the Genome, is when 'slightly detrimental' mutations, which are far below the power of natural selection to remove from a genome, slowly build up in a species/kind over long periods of time and lead to Genetic Meltdown.
Evolution Vs Genetic Entropy - Andy McIntosh - video
The first effect to be obviously noticed in the evidence for the Genetic Entropy principle is the loss of potential for morphological variability in individual sub-species of a kind. This loss of potential for morphological variability first takes place in the extended lineages of sub-species within a kind, and increases with time, and then gradually works into the more ancient lineages of the kind as the 'mutational load' of slightly detrimental mutations slowly builds up over time. The following paper, though of evolutionary bent, offers a classic example of the effects of Genetic Entropy over deep time of millions of years:
A Cambrian Peak in Morphological Variation Within Trilobite Species; Webster
Excerpt: The distribution of polymorphic traits in cladistic character-taxon matrices reveals that the frequency and extent of morphological variation in 982 trilobite species are greatest early in the evolution of the group: Stratigraphically old and/or phylogenetically basal taxa are significantly more variable than younger and/or more derived taxa.
The final effect of Genetic Entropy is when the entire spectrum of the species of a kind slowly starts to succumb to 'Genetic Meltdown' and to go extinct in the fossil record. This occurs because the mutational load of the slowly accumulating 'slightly detrimental mutations' in the genomes becomes too great for each individual species of the kind to bear. From repeated radiations from ancient lineages in the fossil record, and from current adaptive radiation studies which show strong favor for ancient lineages radiating, the ancient lineages of a kind appear to have the most 'robust genomes' and are thus most resistant to Genetic Meltdown. All this consistent evidence makes perfect sense from the Genetic Entropy standpoint, in that Genetic Entropy holds that God created each parent kind with an optimal genome for all future sub-speciation events. My overwhelming intuition, from all the evidence I've seen so far, and from Theology, is this: once God creates a parent kind, the parent kind is encoded with optimal information for the specific purpose for which God has created the kind to exist, and God has chosen, in His infinite wisdom, to strictly limit the extent to which He will act within nature to 'evolve' the sub-species of the parent kind to greater heights of functional complexity. Thus the Biblically compatible principle of Genetic Entropy is found to be in harmony with the second law of thermodynamics and with the strict limit found for material processes ever generating any meaningful amount of functional information on their own (LCI: Dembski-Marks) (Null Hypothesis: Abel).
As a side light to this, it should be clearly pointed out that we know, with 100% certainty, that intelligence can generate functional information, i.e. irreducible complexity. We generate a large amount of functional information, well beyond the reach of the random processes of the universe, every time we write a single paragraph of a letter (+700 Fits on average). The true question we should be asking is this: "Can totally natural processes ever generate functional information?", especially since totally natural processes have never been observed generating any functional information whatsoever from scratch (Kirk Durston). This following short video lays out the completely legitimate scientific basis for inferring Intelligent Design from what we presently observe:
Stephen Meyer: What is the origin of the digital information found in DNA? - short video
As well, 'pure transcendent information' is now shown to be 'conserved' (i.e. it is shown that all transcendent information which can possibly exist, for all possible physical/material events, past, present, and future, must already exist). This is because transcendent information exercises direct dominion over the foundational 'material' entity of this universe, energy, which cannot be created or destroyed by any known 'material' means, i.e. the First Law of Thermodynamics.
Conservation Of Transcendent/Quantum Information - 2007 - video
This following experiment verified the 'conservation of transcendent/quantum information' using a far more rigorous approach;
Quantum no-hiding theorem experimentally confirmed for first time
These following studies verified the violation of the first law of thermodynamics that I had suspected in the preceding 2007 video:
How Teleportation Will Work -
Quantum Teleportation - IBM Research Page
Researchers Succeed in Quantum Teleportation of Light Waves - April 2011
Unconditional Quantum Teleportation - abstract
Of note: conclusive evidence for the violation of the First Law of Thermodynamics is firmly found in the preceding experiment when coupled with the complete displacement of the infinite transcendent information of "Photon c":
In extension to the 2007 video, the following video and article show that quantum teleportation breakthroughs have actually shed a little light on exactly what, or more precisely on exactly Whom, has created this universe:
It is also very interesting to note that the quantum state of a photon is actually defined as 'infinite information' in its uncollapsed quantum wave state:
Quantum Computing - Stanford Encyclopedia
It should be noted in the preceding paper that Duwell, though he never challenges the mathematical definition of a photon qubit as infinite information, tries to refute Bennett's interpretation of infinite information transfer in teleportation because of what he believes are 'time constraints' which would prohibit teleporting 'backwards in time'. Yet Duwell fails to realize that information is its own completely unique transcendent entity, completely separate from any energy-matter or space-time constraints in the first place.
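The sense in which an uncollapsed qubit is said to carry 'infinite information' can be illustrated with a short sketch. This uses the standard textbook parametrization of a single qubit, and is my own illustration rather than anything from the Stanford entry or from Duwell:

import cmath
import math
import random

# A single qubit state |psi> = a|0> + b|1> is fixed by continuous complex
# amplitudes, so specifying it *exactly* would take unboundedly many
# classical bits, which is the sense in which the uncollapsed state is
# described as 'infinite information'.
theta = random.uniform(0, math.pi)
phi = random.uniform(0, 2 * math.pi)
a = math.cos(theta / 2)
b = cmath.exp(1j * phi) * math.sin(theta / 2)
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12  # state is normalized

# Yet a measurement collapses all of that to a single classical bit:
outcome = 0 if random.random() < abs(a) ** 2 else 1
print(a, b, "-> measured:", outcome)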
This following recent paper and experiments, on top of the previously listed 'conservation of quantum information' papers, pretty much blew a hole in Duwell's objection to Bennett, of teleporting infinite information 'backwards in time', simply because he believed there was no such path, or mechanism, to do so:
Time travel theory avoids grandfather paradox - July 2010
Excerpt: “In the new paper, the scientists explore a particular version of CTCs based on combining quantum teleportation with post-selection, resulting in a theory of post-selected CTCs (P-CTCs)... The formalism of P-CTCs shows that such quantum time travel can be thought of as a kind of quantum tunneling backwards in time, which can take place even in the absence of a classical path from future to past... “P-CTCs might also allow time travel in spacetimes without general-relativistic closed timelike curves,” they conclude. “If nature somehow provides the nonlinear dynamics afforded by final-state projection, then it is possible for particles (and, in principle, people) to tunnel from the future to the past.”
Physicists describe method to observe timelike entanglement - January 2011
It should also be noted that the preceding experiments pretty much dot all the i's and cross all the t's as far as concretely establishing 'transcendent information' as its own unique entity; an entity that is completely separate from, and dominant over, space-time, matter and energy.
The following excerpt is also of interest to this issue of time constraints in quantum mechanics:
Solving the quantum mysteries - John Gribbin
Excerpt: As all physicists learn at university (and most promptly forget) the full version of the wave equation has two sets of solutions -- one corresponding to the familiar simple Schrödinger equation, and the other to a kind of mirror image Schrödinger equation describing the flow of negative energy into the past.
As well, I also have another reason to object to Duwell's complaint of 'no mechanism' for information travel to the past, in that I firmly believe Biblical prophecy has actually been precisely fulfilled by Israel's 'miraculous' rebirth as a nation in 1948, as this following video makes clear:
The Precisely Fulfilled Prophecy Of Israel Becoming A Nation In 1948 - video
This following video shows one reason why I personally know there is much more going on in the world than what the materialistic philosophy would lead us to believe:
Miracle Testimony - One Easter Sunday Sunrise Service - video
More supporting evidence for the transcendent nature of information, and how it interacts with energy, is found in these following studies:
Single photons to soak up data:
Ultra-Dense Optical Storage - on One Photon
Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage, and then retrieve the image intact... Quantum mechanics dictates some strange things at that scale, so that bit of light could be thought of as both a particle and a wave. As a wave, it passed through all parts of the stencil at once, carrying the "shadow" of the UR with it.
This following experiment clearly shows information is not an 'emergent property' of any solid material basis as is dogmatically asserted by some materialists:
Converting Quantum Bits: Physicists Transfer Information Between Matter and Light
Atom takes a quantum leap - 2009
This following paper is fairly good for establishing the primacy of transcendent information in the 'reality' of this universe:
What is Truth?
It is also interesting to note that a Compact Disc crammed with information weighs exactly the same as a CD with no information on it whatsoever. Here are a few videos reflecting on some of the characteristics of transcendent information:
Information – Elusive but Tangible – video
Information? What Is It Really? Professor Andy McIntosh - video
Information? - What is it really? Brief Discussion on the Quantum view of information:
But to reflect just a bit more on the teleportation experiment itself, it is interesting to note that scientists can only 'destroy' a photon in these quantum teleportation experiments. No one has 'created' a photon as of yet. I firmly believe man shall never do so, since I hold that only God is infinite, and perfect, in information/knowledge.
Job 38:19-20
Further reflection on the quantum teleportation experiment:
That a photon would actually be destroyed upon the teleportation (separation) of its 'infinite' information to another photon is a direct controlled violation of the first law of thermodynamics (i.e. a photon 'disappeared' from the 'material' universe when the entire information content of a photon was 'transcendently displaced' from the material universe by the experiment, when photon "c" transcendently became transmitted photon "a"). Thus quantum teleportation is direct empirical validation for the primary tenet of the Law of Conservation of Information (i.e. 'transcendent' information cannot be created or destroyed). This conclusion is warranted because information exercises direct dominion over energy, telling energy exactly what to be and do in the experiment. Thus this experiment provides a direct line of logic that transcendent information cannot be created or destroyed and, in demonstrating transcendence of, and dominion over, space-time and matter-energy, information becomes the only known entity that can satisfactorily explain where all energy came from as far as the origination of the universe is concerned. That is, transcendent information is the only known entity which can explain where all the energy came from in the Big Bang without leaving the bounds of empirical science, as the postulated multiverse does. Clearly anything that exercises dominion over the fundamental entity of this physical universe, a photon of energy, as transcendent information does in teleportation, must of necessity possess the same qualities as energy possesses in the first law of thermodynamics (i.e. energy cannot be created or destroyed by any known material means), as well as greater qualities. To reiterate: since information exercises dominion over energy in quantum teleportation, then all information that can exist, for all past, present and future events of energy, must already exist.
As well, the fact that quantum teleportation shows an exact 'location dominion' of a photon of energy by 'specified infinite information' satisfies a major requirement for the entity needed to explain the missing Dark Matter: the needed transcendent explanation would have to dominate energy in a very similar 'specified location' fashion, as is demonstrated by the infinite information of quantum teleportation.
Colossians 1:17
Moreover, the fact that simple quantum entanglement shows 'coordinated universal control' of entangled photons of energy, by transcendent information, regardless of distance, satisfies a major requirement for the entity which must explain the missing Dark Energy. i.e. The transcendent entity needed to explain Dark Energy must explain why the entire space of the universe is expanding to such a finely-tuned, coordinated degree, and would have to employ a mechanism of control very similar to what we witness in the quantum entanglement experiment.
Job 9:8
He stretches out the heavens by Himself and walks on the waves of the sea.
Thus 'infinite transcendent information' provides a coherent picture of overarching universal control, and specificity, that could possibly unify gravity with the other forces. It very well may be possible to elucidate, mathematically, the overall pattern God has chosen to implement infinite information in this universe. The following article backs up this assertion:
Is Unknown Force In Universe Acting On Dark Matter?
Excerpt: “It is possible that a non-gravitational fifth force is ruling the dark matter with an invisible hand, leaving the same fingerprints on all galaxies, irrespective of their ages, shapes and sizes.” Dr Famaey added, “If we account for our observations with a modified law of gravity, it makes perfect sense to replace the effective action of hypothetical dark matter with a force closely related to the distribution of visible matter.”
Dark Matter Halos of Disk Galaxies
Excerpt: Dark matter’s properties can only be inferred indirectly by observing the motions of the stars and gas (of a galaxy).
"I discovered that nature was constructed in a wonderful way, and our task is to find out its mathematical structure"
Albert Einstein - The Einstein Factor - Reader's Digest
Special Relativity - Time Dilation and Length Contraction - video
Moreover time, as we understand it, would come to a complete stop at the speed of light. To grasp the whole 'time coming to a complete stop at the speed of light' concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same 'thought experiment' that gave Einstein his breakthrough insight into E=mc^2.
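The 'time coming to a complete stop' claim is simply the limiting case of the standard special-relativistic time-dilation factor; here is a minimal sketch of that ordinary textbook formula, nothing more:

import math

def lorentz_gamma(v_fraction_of_c):
    # Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2). A moving clock
    # runs slow by this factor; as v approaches c, gamma diverges and the
    # clock, as seen by a stationary observer, effectively stops.
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

for v in (0.1, 0.9, 0.99, 0.999999):
    print(f"v = {v}c: one moving-clock second takes {lorentz_gamma(v):.3f} s")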
Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video
Light and Quantum Entanglement Reflect Some Characteristics Of God - video
"I've just developed a new theory of eternity."
Albert Einstein - The Einstein Factor - Reader's Digest
Richard Swenson - More Than Meets The Eye, Chpt. 12
Experimental confirmation of Time Dilation
It is also very interesting to note that we have two very different qualities of ‘eternality of time’ revealed by our time dilation experiments;
Time Dilation - General and Special Relativity - Chuck Missler - video
Time dilation
Excerpt: Time dilation: special vs. general theories of relativity:
1. In special relativity (or, hypothetically, far from all gravitational mass), clocks that are moving with respect to an inertial system of observation are measured to be running slower. (i.e. For any observer accelerating, hypothetically, to the speed of light, time, as we understand it, will come to a complete stop.)
i.e. As with any observer accelerating to the speed of light, it is found that for any observer falling into the event horizon of a black hole, time, as we understand it, will come to a complete stop. But of particular interest is the ‘eternal framework’ found for General Relativity at black holes: it is interesting to note that entropic decay (randomness), which is the primary reason why things grow old and eventually die in this universe, is found to be greatest at black holes. Thus the ‘eternality of time’ at black holes can rightly be called an ‘eternality of decay and/or destruction’.
Entropy of the Universe - Hugh Ross - May 2010
Roger Penrose – How Special Was The Big Bang?
i.e. Black Holes are found to be ‘timeless’ singularities of destruction and disorder rather than singularities of creation and order such as the extreme order we see at the creation event of the Big Bang. Needless to say, the implications of this ‘eternality of destruction’ should be fairly disturbing for those of us who are of the ‘spiritually minded' persuasion!
Matthew 10:28
On the Mystery, and Plasticity, Of Space-Time
Space-Time and Our Place In It
It is very interesting to note that this strange higher-dimensional, eternal framework for time, found in both special relativity and general relativity, finds corroboration in Near Death Experience testimonies:
Mickey Robinson - Near Death Experience testimony
Dr. Ken Ring - has extensively studied Near Death Experiences
'Earthly time has no meaning in the spirit realm. There is no concept of before or after. Everything - past, present, future - exists simultaneously.' - Kimberly Clark Sharp - NDE Experiencer
'There is no way to tell whether minutes, hours or years go by. Existence is the only reality and it is inseparable from the eternal now.' - John Star - NDE Experiencer
What Will Heaven be Like? by Rich Deem
Excerpt: Since heaven is where God lives, it must contain more physical and temporal dimensions than those found in this physical universe that God created. We cannot imagine, nor can we experience in our current bodies, what these extra dimensions might be like.
It is also very interesting to point out that the 'light at the end of the tunnel' reported in many Near Death Experiences (NDEs) is also corroborated by Special Relativity when considering the optical effects of traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-dimensional world 'folds and collapses' into a tunnel shape around the direction of travel as a 'hypothetical' observer moves toward the 'higher dimension' of the speed of light, with the 'light at the end of the tunnel' reported in very many Near Death Experiences. (Of note: the following video was made by two Australian university physics professors with a supercomputer.)
Traveling At The Speed Of Light - Optical Effects - video
Here is the interactive website, with link to the relativistic math at the bottom of the page, related to the preceding video;
Seeing Relativity
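The 'folding' of the visual field toward the direction of travel in the preceding video is the standard relativistic aberration (headlight) effect; a minimal sketch of the textbook formula shows how nearly the whole sky crowds into a narrow forward 'tunnel' as speed approaches c:

import math

def aberrated_angle_deg(theta_rest_deg, beta):
    # Apparent direction of a light source for an observer moving at
    # beta = v/c, with theta measured from the direction of travel:
    # cos(theta_moving) = (cos(theta_rest) + beta) / (1 + beta * cos(theta_rest))
    c = math.cos(math.radians(theta_rest_deg))
    return math.degrees(math.acos((c + beta) / (1 + beta * c)))

# A source 90 degrees off to the side appears ever further ahead:
for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"beta = {beta}: appears {aberrated_angle_deg(90, beta):.2f} deg from dead ahead")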
The NDE and the Tunnel - Kevin Williams' research conclusions
Near Death Experience - The Tunnel - video
Near Death Experience – The Tunnel, The Light, The Life Review – video
As well, as with the tunnel being present in heavenly NDE's, we also have mention of tunnels in hellish NDE testimonies;
A man, near the beginning of this video, gives testimony of falling down a 'tunnel' in the transition stage from this world to hell:
Hell - A Warning! - video
The man, in this following video, also speaks of 'tumbling down' a tunnel in his transition stage to hell:
Bill Wiese on Sid Roth – video
As well, as with the scientifically verified tunnel of special relativity, we also have scientific confirmation of extreme ‘tunnel curvature’, within space-time, to an eternal ‘event horizon’ at black holes;
Space-Time of a Black hole
Akiane Kramarik - Child Prodigy
Artwork homepage
Music video
As a side light to this, leading quantum physicist Anton Zeilinger has followed in the footsteps of John Archibald Wheeler (1911-2008) by insisting that reality, at its most foundational level, is 'information'.
"It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin."
John Archibald Wheeler
Why the Quantum? It from Bit? A Participatory Universe?
Prof Anton Zeilinger speaks on quantum physics. at UCT - video
Zeilinger's principle
In the beginning was the bit - New Scientist
'Quantum Magic' Without Any 'Spooky Action at a Distance' - June 2011
Quantum Entanglement and Teleportation - Anton Zeilinger - video
It should be noted that the popular science-fiction conception of the universe as 'merely' a computer simulation (as in the 'Matrix' movies), a conception drawn from the fact that 'material' reality is now shown to be reducible to information, is far too simplistic:
Quantum Computing Promises New Insights, Not Just Supermachines - Scott Aaronson - December 2011
Excerpt: And yet, even though useful quantum computers might still be decades away, many of their payoffs are already arriving. For example, the mere possibility of quantum computers has all but overthrown a conception of the universe that scientists like Stephen Wolfram have championed. That conception holds that, as in the “Matrix” movies, the universe itself is basically a giant computer, twiddling an array of 1’s and 0’s in essentially the same way any desktop PC does.
Quantum computing has challenged that vision by showing that if “the universe is a computer,” then even at a hard-nosed theoretical level, it’s a vastly more powerful kind of computer than any yet constructed by humankind. Indeed, the only ways to evade that conclusion seem even crazier than quantum computing itself: One would have to overturn quantum mechanics, or else find a fast way to simulate quantum mechanics using today’s computers.
Here are some more interesting videos that also arrive at a 'information basis' for reality from a slightly different perspective:
A Very Unusual Proof for the Existence of God - video - (Collapse of wave function)
You Are Made Of Information - video - (Don't believe me? Any ontology other than information monism leads to self-contradiction)
Continued comments:
The restriction imposed by our physical limitations, which prevent us from ever bringing complete infinite information into our temporal space-time framework/dimension (Wheeler; Zeilinger), does not detract, in any way, from the primacy and dominion of the infinite transcendent information framework that is now established by the quantum teleportation experiment as the primary reality of our reality. Of note: all of this evidence meshes extremely well with the theistic postulation of God possessing infinite and perfect knowledge. This seems like a fitting place for the following quote and verse:
William Blake
Psalm 19:1-2
As well it should be noted that, counter-intuitive to materialistic thought (and to every kid who has ever taken a math exam), a computer does not consume energy during computation itself but only consumes energy when information is erased from it. This counter-intuitive fact is formally known as Landauer's Principle: erasing information is a thermodynamically irreversible process that increases the entropy of a system, i.e. only irreversible operations consume energy; reversible computation does not. Unfortunately a computer will eventually run out of information storage space and must begin to 'irreversibly' erase the information it has previously gathered (Bennett, 1982), and thus a computer must eventually use energy. i.e. A 'material' computer must eventually obey the second law of thermodynamics for its computation.
Landauer's principle
Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase,,, Specifically, each bit of lost information will lead to the release of a (specific) amount (at least kT ln 2) of heat.,,, Landauer's Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008)."
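To make that 'kT ln 2' figure concrete, here is a minimal Python sketch (assuming room temperature, T = 300 K) of the Landauer limit, the minimum heat released per erased bit:

import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
T = 300.0           # assumed room temperature in kelvin

# Landauer limit: minimum heat released when one bit is erased
E_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_per_bit:.3e} joules per bit")
# prints roughly 2.871e-21 joules per bit

Tiny as that number is, it is strictly greater than zero, which is the whole point: erasure, and only erasure, must be paid for in heat.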
It should be noted that Rolf Landauer himself maintained that information in a computer was 'physical'. He held that information in a computer was merely an 'emergent' property of the computer's material basis, and thus that the information programmed into a computer was not really 'real'. Landauer held this 'materialistic' position in spite of an objection from Roger Penrose that information is indeed real and has its own independent existence separate from a computer. Landauer reasoned that since it takes energy to erase information from a computer, information must be 'merely physical'. Yet now the validity of that fairly narrowly focused objection from Landauer, to the reality of 'transcendent information' encoded within the computer, has been brought into question. i.e. Landauer's Principle may not be nearly as 'ironclad' as Landauer had originally believed.
Scientists show how to erase information without using energy - January 2011
Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, "Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.", Vaccaro explained.
This following research provides far more solid falsification of Rolf Landauer's contention that information encoded in a computer is merely physical (merely 'emergent' from a material basis), a contention which rested on the belief that it always requires energy to erase information;
Quantum knowledge cools computers: New understanding of entropy - June 2011
Excerpt: No heat, even a cooling effect;
Further comments:
"Those devices (computers) can yield only approximations to a structure (of information) that has a deep and "computer independent" existence of its own." - Roger Penrose - The Emperor's New Mind - Pg 147
Norbert Wiener - MIT Mathematician - Father of Cybernetics
Yet even without the falsification of Rolf Landauer's contention that information is merely 'physical', this ability of a computer to 'compute answers' without, hypothetically, ever consuming energy is very suggestive that the answers/truth already exist in reality. In fact, when taken to its logical conclusion, it is very suggestive of the postulation of John 1:1 that 'Logos' is ultimately the foundation of our 'material' reality in the first place.
John 1:1-3
(of note: 'Word' in Greek is 'Logos', and is the root word from which we get our word 'Logic')
This strange anomaly between lack of energy consumption and the computation of information appears to hold for the human mind as well.
Appraising the brain's energy budget:
Excerpt: In the average adult human, the brain represents about 2% of the body weight. Remarkably, despite its relatively small size, the brain accounts for about 20% of the oxygen and, hence, calories consumed by the body. This high rate of metabolism is remarkably constant despite widely varying mental and motoric activity. The metabolic activity of the brain is remarkably constant over time.
Excerpt: Although Lennox considered the performance of mental arithmetic as "mental work", it is not immediately apparent what the nature of that work in the physical sense might be if, indeed, there be any. If no work or energy transformation is involved in the process of thought, then it is not surprising that cerebral oxygen consumption is unaltered during mental arithmetic.
The preceding experiments are very unexpected to materialists since materialists hold that 'mind' is merely an 'emergent property' of the physical processes of the material brain.
Further note: Considering that computers cannot pass this following test for creating new information,,,
"... no operation performed by a computer can create new information."
-- Douglas G. Robertson, "Algorithmic Information Theory, Free Will and the Turing Test," Complexity, Vol.3, #3 Jan/Feb 1999, pp. 25-34.
Evolutionary Informatics - William Dembski & Robert Marks
Excerpt: The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.,,, Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality's ability to produce the required information. Evolutionary informatics, while falling squarely within the information sciences, thus points to the need for an ultimate information source qua intelligent designer.
Information. What is it? - Robert Marks - video
Estimating Active Information in Adaptive Mutagenesis
“Computers are no more able to create information than iPods are capable of creating music.”
Robert Marks
,,Whereas humans can fairly easily pass the test for creating new information,,
"So, to sum up: computers can reshuffle specifications and perform any kind of computation implemented in them. They are mechanical, totally bound by the laws of necessity (algorithms), and non conscious. Humans can continuously create new specification, and also perform complex computations like a computer, although usually less efficiently. They can create semantic output, make new unexpected inferences, recognize and define meanings, purposes, feelings, and functions, and certainly conscious representations are associated with all those kinds of processes."
Uncommon Descent blogger - gpuccio
,,,thus these findings strongly imply that we humans have a 'higher informational component' to our being; i.e. these findings offer another line of corroborating evidence for the idea that humans have a mind which is transcendent of the physical brain and which is part of a 'unique soul from God'. Moreover this unique mind that each human has seems to be capable of a special and intimate communion with God that is unavailable to other animals, i.e. we are capable of communicating information with 'The Word' described in John 1:1.
I also liked this insight, from a computer programmer with a PhD in Physics, about a fundamental difference between human consciousness and computer programs:
The simple fact is this, despite years of experience writing many complex codes, I can not write a computer program that disobeys me. I don’t even know how to do it. I can write computer programs that have bugs and don’t perform what I thought they were going to do; I can write computer programs that make pseudo-random choices. I do not know how to write a program that disobeys. I would contend it can’t be done. But the ability to disobey the Creator is the essence of consciousness. Otherwise it’s just complicated programming with random choices.
Also of related interest is Dr. Werner Gitt's lecture on information:
In The Beginning Was Information - Werner Gitt - video
You may download Dr. Gitt's book, In The Beginning Was Information, at this website:
Thus now, with the mathematical definition of functional information in place for molecular biology, with 'infinite transcendent information' shown to be 'conserved', with 'consciousness' found to be foundational to our reality, and with Genetic Entropy outlined as the primary principle for biological adaptations, Intelligent Design can now be scientifically tested against any materialistic theory which proposes that a certain system arose by blind chance and random material processes rather than being the handiwork of God.
I would like to point out that when a molecular sub-system of a biological organism passes the probability threshold of 1 in 10^150 (that's a one with 150 zeros to the right of it), then it is considered, by very stringent guidelines which allow for far more 'quantum events' than will ever happen in the universe, overwhelmingly impossible for the universe to ever account for the system arising by chance alone. (Dembski, Abel)
Here is how the base line for the universe's probabilistic resources are calculated:
Signature in the Cell - Book Review - Ken Peterson
The Universal Plausibility Metric (UPM) & Principle (UPP) - Abel - Dec. 2009
cΩu = Universe = 10^13 reactions/sec X 10^17 secs X 10^78 atoms = 10^108
cΩg = Galaxy = 10^13 X 10^17 X 10^66 atoms = 10^96
cΩs = Solar System = 10^13 X 10^17 X 10^55 atoms = 10^85
cΩe = Earth = 10^13 X 10^17 X 10^40 atoms = 10^70
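A quick arithmetic check of the preceding figures, as a minimal Python sketch, assuming Abel's inputs of 10^13 reactions per second, 10^17 seconds of cosmic history, and the atom counts listed above:

# Universal Plausibility Metric resources (after Abel, 2009):
# (reactions/sec) x (seconds of cosmic history) x (number of atoms)
rate = 10**13   # quantum transitions per second
age  = 10**17   # approximate age of the universe in seconds

for scale, atoms in [("Universe", 10**78), ("Galaxy", 10**66),
                     ("Solar System", 10**55), ("Earth", 10**40)]:
    total = rate * age * atoms
    print(f"{scale}: 10^{len(str(total)) - 1}")
# prints 10^108, 10^96, 10^85, 10^70, matching the list above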
Programming of Life - Probability - Defining Probable, Possible, Feasible etc.. - video
New Peer-Reviewed Paper Demolishes Fallacious Objection: “Aren’t There Vast Eons of Time for Evolution?” - Dec. 2009
This 'universal limit' for functional information generation is generously set at 140 Functional Information Bits (Fits) by Kirk Durston. A molecular sub-system that exceeds this limit is considered to be irreducibly complex in its functional information content, and thus is considered to be intelligently designed. Though irreducible complexity is primarily used by Intelligent Design proponents for deducing design in molecular biology, the concept of irreducible complexity can also be applied, in an overarching form, to the entire universe to find whether man is indeed God's primary purpose for creating this universe. Thus irreducible complexity can also be used to verify the anthropic hypothesis.
Irreducible Complexity and the Anthropic Principle - John Clayton - video
The following are some basic questions that need to be answered, to find if either the anthropic hypothesis or some materialistic hypothesis is correct.
I. What evidence is found for the universe's ability to support life?
II. What evidence is found for the earth's ability to support life?
III. What evidence is found for the first life on earth?
IV. What evidence is found for the appearance of all species of life on earth, and is man the last species to appear on earth?
V. What evidence is found for God's personal involvement with man?
Before we start answering these five basic questions, I would like to reiterate, as clearly as possible, that any 'solid material atomic' foundation for this universe, which was the primary postulation of materialism in the first place, has now been completely crushed by our present understanding of quantum mechanics. Little do most people realize that there is actually no solid indestructible particle at all at the basis of our reality somewhere within the atom. Each and every sub-atomic particle in the atom (proton, neutron, electron, etc.) is subject to the laws of quantum mechanics. Quantum mechanics is about as far away from the solid material particle/atom that materialism had predicted as the basis of reality as can be had. These following videos and articles make this point clear:
Uncertainty Principle - The 'Uncertain Non-Particle' Basis Of Material Reality - video and article
Electron diffraction
Excerpt: The de Broglie hypothesis, formulated in 1926, predicts that particles should also behave as waves. De Broglie's formula was confirmed three years later for electrons (which have a rest-mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their work.
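For a sense of scale, the de Broglie relation (wavelength = h/p) can be evaluated directly; a minimal Python sketch, assuming a non-relativistic electron accelerated through 100 volts (a typical diffraction-experiment energy):

import math

h = 6.62607015e-34    # Planck constant in J*s
m = 9.1093837015e-31  # electron rest mass in kg
q = 1.602176634e-19   # elementary charge in C
V = 100.0             # assumed accelerating voltage in volts

p = math.sqrt(2 * m * q * V)  # non-relativistic momentum gained
wavelength = h / p            # de Broglie wavelength
print(f"{wavelength * 1e9:.4f} nm")
# prints ~0.1226 nm, comparable to atomic spacing in a crystal,
# which is why crystal lattices act as diffraction gratings for electrons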
As well, many of the actions of the electron blatantly defy our concepts of time and space:
The Electron - The Supernatural Basis of Reality - video
Electron entanglement near a superconductor and bell inequalities
Excerpt: The two electrons of these pairs have entangled spin and orbital degrees of freedom.,,, We formulate Bell-type inequalities in terms of current-current cross-correlations associated with contacts with varying magnetization orientations. We find maximal violation (as in photons) when a superconductor is the particle source. (i.e. electrons have a 'non-local', beyond space and time, cause sustaining them.)
Double-slit experiment
Excerpt: (Though normally done with photons) The double slit experiment can also be performed (using different apparatus) with particles of matter such as electrons with the same results, demonstrating that they also show particle-wave duality.
Quantum Mechanics – Quantum Results, Theoretical Implications Of Quantum Mechanics
Excerpt: Bohr proposed that electrons existed only in certain orbits and that, instead of traveling between orbits, electrons made instantaneous quantum leaps or jumps between allowed orbits.,,, The electron quantum leaps between orbits proposed by the Bohr model accounted for Planck's observations that atoms emit or absorb electromagnetic radiation in quanta. Bohr's model also explained many important properties of the photoelectric effect described by Albert Einstein (1879–1955).
"Atoms are not things"
Werner Heisenberg
Niels Bohr
This following article is interesting for it shows that very small quantum events can have dramatic effects on large objects:
How 'spooky' quantum mechanical laws may affect everyday objects (Update) - July 2010
Excerpt: "The difference in size between the two parts of the system is really extreme," Blencowe explained. "To give a sense of perspective, imagine that the 10,000 electrons correspond to something small but macroscopic, like a flea. To complete the analogy, the crystal would have to be the size of Mt. Everest. If we imagine the flea jumping on Mt. Everest to make it move, then the resulting vibrations would be on the order of meters!"
What blows most people away, when they first encounter quantum mechanics, is that the quantum foundation of our material reality blatantly defies our concepts of time and space. Most people consider defying time and space to be a 'miraculous & supernatural' event. I know I certainly do! There is certainly nothing within quantum mechanics that precludes miracles from being possible:
How can an Immaterial God Interact with the Physical Universe? (Alvin Plantinga) - video
This 'miraculous & supernatural' foundation for our physical reality can easily be illuminated by the famous 'double slit' experiment. (It should be noted that the double slit experiment was originally devised, in 1801, by a Christian polymath named Thomas Young.) Though I've listed the following video before, it is well worth revisiting here:
Dr. Quantum - Double Slit Experiment & Entanglement - video
Double-slit experiment
Excerpt: In 1999 objects large enough to see under a microscope, buckyball (interlocking carbon atom) molecules (diameter about 0.7 nm, nearly half a million times that of a proton), were found to exhibit wave-like interference.
This following site offers a more formal refutation of materialism:
Why Quantum Theory Does Not Support Materialism - By Bruce L Gordon:
Excerpt: Because quantum theory is thought to provide the bedrock for our scientific understanding of physical reality, it is to this theory that the materialist inevitably appeals in support of his worldview. But having fled to science in search of a safe haven for his doctrines, the materialist instead finds that quantum theory in fact dissolves and defeats his materialist understanding of the world.
It should also be noted that the 'uncertainty principle' for 3-D material particles extends even to the point of not being able to determine the exact radius of an electron at complete rest:
PhysForum Science
Excerpt: We have an upper limit on the radius of the electron, set by experiment, but that’s about it. By our current knowledge, it is an elementary particle with no internal structure, and thus no ’size’.”
I would also like to point out that the hardest, most solid, indestructible 'thing' in a material object, such as a rock, is not any of the wave/particles in any of the atoms of the rock, but the unchanging, transcendent, universal constants which exercise overriding 'non-chaotic' dominion over all the wave/particle quantum events in the atoms of the rock. i.e. It is the unchanging stability of the universal 'transcendent information' constants, which have not varied one iota from the universe's creation as far as scientists can tell, that allows a rock to be 'rock solid' in the first place.
What is Truth?
Stability of Coulomb Systems in a Magnetic Field - Charles Fefferman
Excerpt of Abstract: I study N electrons and M protons in a magnetic field. It is shown that the total energy per particle is bounded below by a constant independent of M and N, provided the fine structure constant is small. Here, the total energy includes the energy of the magnetic field.
Testing Creation Using the Proton to Electron Mass Ratio
Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,,
As well, it seems fairly obvious the actions observed in the double slit experiment, as well as other experiments, are only possible if our reality has its actual basis in a 'higher transcendent dimension':
Explaining The Unseen Higher Dimension - Dr. Quantum - Flatland - video
These following videos and articles on Dark Energy and Dark Matter put another nail in the coffin of the materialistic philosophy (as if it were not already completely falsified):
The abstract of the September 2006 Report of the Dark Energy Task Force says: “Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our (materialistic) theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”
The Mathematical Anomaly Of Dark Matter - video
Dark matter halo
Excerpt: The dark matter halo is the single largest part of the Milky Way Galaxy as it covers the space between 100,000 light-years to 300,000 light-years from the galactic center. It is also the most mysterious part of the Galaxy. It is now believed that about 95% of the Galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the Galaxy's matter and energy in any way except through gravity. The dark matter halo is the location of nearly all of the Milky Way Galaxy's dark matter, which is more than ten times as much mass as all of the visible stars, gas, and dust in the rest of the Galaxy.
Gas rich galaxies confirm prediction of modified gravity theory (MOND) - February 2011
Excerpt: Almost everyone agrees that on scales of large galaxy clusters and up, the Universe is well described by dark matter - dark energy theory. However, according to McGaugh this cosmology does not account well for what happens at the scales of galaxies and smaller. "MOND is just the opposite," he said. "It accounts well for the 'small' scale of individual galaxies, but MOND doesn't tell you much about the larger universe."
Hubble Finds Ring of Dark Matter - video
The Elusive "non-Material" Foundation For Gravity:
Study Sheds Light On Dark Energy - video
Hugh Ross PhD. - Scientific Evidence For Dark Energy - video
What The Universe Is Made Of?: - Pie Chart Graph
96% Invisible "Stuff" vs. 4% Visible Material (Of Note: as of 2008 visible matter only accounts for less than .27% of everything that exists in the universe)
Dark Matter:
Despite comprehensive maps of the nearby universe that cover the spectrum from radio to gamma rays, we are only able to account for 10% of the mass that must be out there. (Actually it is now known to be only .27%.) "It's a fairly embarrassing situation to admit that we can't find 90 percent of the universe." - Astronomer Bruce H. Margon
Table 2.1 - Inventory of All the Stuff That Makes Up the Universe (Visible vs. Invisible)
Dark Energy 72.1%
Exotic Dark Matter 23.3%
Ordinary Dark Matter 4.35%
Ordinary Bright Matter (Stars) 0.27%
Planets 0.0001%
Invisible portion - Universe 99.73%
Visible portion - Universe 0.27%
of note: The preceding 'inventory' of the universe is updated to the second and third releases of the Wilkinson Microwave Anisotropy Probe's (WMAP) results in 2006 & 2008; (Why The Universe Is The Way It Is; Hugh Ross; pg. 37)
Now that the materialistic philosophy is thoroughly deprived of any empirical validation for its primary tenet of a solid particle/atom at the basis of our temporal reality, let's look at the five questions I listed earlier, starting with the first question and working our way to the last.
Romans 1:20
To answer our first question (What evidence is found for the universe's ability to support life?) we will look at the universe and see how its 'parts' are put together. Let's start with carbon. Carbon is shown to be the only element, from the periodic table of elements, with which the complex molecules of life in this universe may be built. The carbon atom is a marvel in and of itself. Carbon is the sixth element on the periodic table and makes up two tenths of one percent of the earth's crust. It is the backbone on which all life is built or can be built, and it makes up 18% of the mass of our body. In its pure form it is recognized as soot, pencil lead or diamonds. Diamonds are the hardest substance known. Carbon fiber is the strongest fiber known, and is used in the construction of high performance airplanes, tennis rackets and bicycles, just to name a few. Man-made carbon-based molecules have allowed breakthroughs in low temperature super-conductors.

Carbon-60, a discovery from the 1980's called the buckyball, is a molecule of sixty interlocking carbon atoms and is the roundest substance known in all molecular science. Graphene, a more recent 'revolutionary' discovery within the last decade, is a remarkably flat molecule made of ordinary carbon atoms arranged in a hexagonal, "chicken-wire" lattice, and is the thinnest material possible. These layers, sometimes just a single atom thick, conduct electricity with virtually no resistance, very little heat generation, and less power consumption than silicon. Graphene conducts electricity better than any other known material at room temperature, is ten times stronger than steel, and promises to greatly outperform silicon in computer chips in the near future.

Carbon has the unique ability to form long chains of complex molecules that have a high degree of stability. Stable complex molecules are required to build sugars, DNA, RNA, amino acids, proteins, cells, and finally, all living organisms on earth. Substances formed around carbon far out-number all other substances combined; no other element comes close to forming the wide variety of stable compounds that carbon does. Yet if it were not for this unique ability to form complex molecules, life could not exist. Organic chemistry, the study of carbon compounds and their profuse and intricate behavior, is a dedicated science in its own right.
The only element similar to carbon, which has the necessary atomic structure to form the macro (large) molecules needed for life, is silicon. Yet silicon, though having the correct atomic structure, is severely limited in its ability to make complex macro-molecules; silicon-based molecules are comparatively unstable and sometimes highly reactive. Thus from this, and many other evidences against silicon, carbon is found to be the only element from which life in this universe may be built. Carbon and the other 'heavy' elements also provide one, of several, reasons why the universe must be as old and as large as it is. 'Heavy' elements did not form in the Big Bang; they had to be synthesized in stars and exploded into space before they were available to form a planet on which carbon-based life could exist. Carbon is the first of the 'heavy' elements that is exclusively formed in the interiors of stars; all the elements below carbon were exclusively, or semi-exclusively, formed within the Big Bang of the universe. The delicate balance at which carbon is synthesized in stars is truly a work of art. Fred Hoyle (1915-2001), the famed astrophysicist who in 1946 established as mathematically valid the nucleo-synthesis of heavier elements within stars, stated, years after discovering the stunning precision with which carbon is synthesized in stars:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? ... I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. -
Sir Fred Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.
Sir Fred also stated:
Sir Fred Hoyle - "The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12
Michael Denton - We Are Stardust - Uncanny Balance Of The Elements - Atheist Fred Hoyle's conversion to a Deist/Theist - video
Peer-Reviewed Paper Argues for an Engineered Universe - January 2012 - podcast
God's Creation - The Miracle Of Carbon & Water - video
What could make a scientist who was such a staunch atheist, as Hoyle was before his discoveries, make such a statement? The reason is that Hoyle was expertly trained in the exacting standards of mathematics; he knew numbers cannot lie when correctly used and interpreted. What he found was a staggering numerical balance among the many independent universal constants needed to synthesize carbon in stars. These independent constants were of such a high degree of precision as to leave no room for blind chance whatsoever. Thus, with no wiggle room for the blind chance of materialism, Fred Hoyle had to admit that the evidence he found was compelling to the proposition of intelligent design by an infinitely powerful, and transcendent, Creator. Let's look at some of these exacting mathematical standards, and finely tuned universal constants, to see the precision of 'intelligent design' he saw in the math, and in the foundational building blocks of the transcendent universal 'information' constants, of this universe.
Sometimes atheists will appeal to chaos theory to explain how complexity can arise from simplicity. A vivid example of what they are proposing is here:
Mandelbrot Set Zoom - video
Yet as a criticism of complexity arising from simplicity, as is naturally assumed in chaos theory, it should be noted that, at the very least, two very different equations are in play here: one equation governs the micro actions of the universe, and the other governs the macro shape of the universe. The following is the equation that governs 'micro' actions:
Finely Tuned Big Bang, Elvis In The Multiverse, and the Schroedinger Equation - Granville Sewell - audio
At the 4:00 minute mark of the preceding audio, Dr. Sewell comments on the ‘transcendent’ and ‘constant’ Schroedinger’s Equation;
‘In chapter 2, I talk at some length on the Schroedinger Equation which is called the fundamental equation of chemistry. It’s the equation that governs the behavior of the basic atomic particles subject to the basic forces of physics. This equation is a partial differential equation with a complex valued solution. By complex valued I don’t mean complicated, I mean involving solutions that are complex numbers, a+bi, which is extraordinary that the governing equation, basic equation, of physics, of chemistry, is a partial differential equation with complex valued solutions. There is absolutely no reason why the basic particles should obey such an equation that I can think of except that it results in elements and chemical compounds with extremely rich and useful chemical properties. In fact I don’t think anyone familiar with quantum mechanics would believe that we’re ever going to find a reason why it should obey such an equation, they just do! So we have this basic, really elegant mathematical equation, partial differential equation, which is my field of expertise, that governs the most basic particles of nature and there is absolutely no reason why, anyone knows of, why it does, it just does. British physicist Sir James Jeans said “From the intrinsic evidence of His creation, the great architect of the universe begins to appear as a pure mathematician”, so God is a mathematician too’.
i.e. the Materialist is at a complete loss to explain why this should be so, whereas the Christian Theist presupposes such ‘transcendent’ control of our temporal, material, reality,,,
John 1:1
of note; 'the Word' is translated from the Greek word ‘Logos’. Logos happens to be the word from which we derive our modern word ‘Logic’.
The following is the very 'different' equation that is found to govern the 'macro' structure of the universe:
e^(i*pi) + 1 = 0 (Euler)
Believe it or not, the five most important numbers in mathematics are tied together, through the complex domain, in Euler's identity. And that points, ever so subtly but strongly, to a world of reality beyond the immediately physical. Many people resist the implications, but there the compass needle points: to a transcendent reality that governs our 3D 'physical' reality.
God by the Numbers - Connecting the constants
Excerpt: The final number comes from theoretical mathematics. It is Euler's (pronounced "Oiler's") number: e^(pi*i). This number is equal to -1, so when the formula is written e^(pi*i) + 1 = 0, it connects the five most important constants in mathematics (e, pi, i, 0, and 1) along with three of the most important mathematical operations (addition, multiplication, and exponentiation). These five constants symbolize the four major branches of classical mathematics: arithmetic, represented by 1 and 0; algebra, by i; geometry, by pi; and analysis, by e, the base of the natural log. e^(pi*i) + 1 = 0 has been called "the most famous of all formulas," because, as one textbook says, "It appeals equally to the mystic, the scientist, the philosopher, and the mathematician."
(of note: the equation itself is more properly called Euler's Identity in math circles; 'Euler's number' properly refers to the constant e.)
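For what it is worth, Euler's Identity can be checked numerically in a couple of lines; a minimal Python sketch using the built-in complex math module:

import cmath, math

# Euler's Identity: e^(i*pi) + 1 = 0
z = cmath.exp(1j * math.pi) + 1
print(z)                # (0+1.2246467991473532e-16j), i.e. zero up to rounding error
print(abs(z) < 1e-15)   # True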
Moreover Euler’s Identity, rather than just being the most enigmatic equation in math, finds striking correlation to how our 3D reality is actually structured,,,
The following picture, Bible verse, and video are very interesting since, with the discovery of the Cosmic Microwave Background Radiation (CMBR), the universe is found to actually be spherical, which 'coincidentally' corresponds to the circle of pi within Euler's identity:
Picture of CMBR
Proverbs 8:26-27
The Known Universe by AMNH – video - (please note the 'centrality' of the Earth in the universe in the video)
The flatness of the 'entire' universe, which 'coincidentally' corresponds to the flat diameter of the circle of pi in Euler's identity, is found on this following site (of note: this flatness of the universe is an extremely finely tuned condition for the universe that could have, in reality, taken a multitude of values other than 'flat'):
Did the Universe Hyperinflate? – Hugh Ross – April 2010
Excerpt: Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe's curvature parameter, Ωk, exist. Analysis of the 5-year database from WMAP establishes that -0.0170 < Ωk < 0.0068. Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < Ωk < 0.009. Both measurements confirm the universe indeed manifests zero or very close to zero geometric curvature,,,
This following video, which I've listed previously, shows that the universe also has the primary characteristic of expanding/growing equally in all places, which 'coincidentally' strongly corresponds to the 'e' in Euler's identity. 'e' is the constant used in all sorts of mathematical equations for finding the true rates of growth and decay in this universe:
Every 3D Place Is Center In This Universe – 4D space/time – video
This following video shows how finely tuned the '4-Dimensional' expansion of the universe is (1 in 10^120);
Towards the end of the following video, Michael Denton speaks of the square root of negative 1 being necessary to understand the foundational quantum behavior of this universe. The square root of -1 is also 'coincidentally' found in Euler's identity:
Michael Denton – Mathematical Truths Are Transcendent And Beautiful – Square root of -1 is built into the fabric of reality – video
I find it extremely strange that the enigmatic Euler's identity, which was deduced centuries ago, would find such striking correlation to how reality is actually found to be structured by modern science. In pi we have correlation to the 'sphere of the universe' revealed by the Cosmic Background Radiation, as well as to the finely-tuned 'geometric flatness' that has now been found within that sphere. In 'e' we have the fundamental constant used in math for ascertaining exponential growth, which strongly correlates to the fact that space-time is 'expanding/growing equally' in all places of the universe. In the square root of -1 we have what is termed an 'imaginary number', first proposed in the 17th century to help solve equations like x^2 + 1 = 0, yet now, as Michael Denton pointed out in the preceding video, the square root of -1 is found to be required to explain the behavior of quantum mechanics in this universe. The correlation of Euler's identity to the foundational characteristics of how this universe is constructed and operates points overwhelmingly to a transcendent Intelligence, with a capital I, which created this universe! It should also be noted that these universal constants, pi, e, and the square root of -1, were at first thought by many to be completely transcendent of any material basis. To find that these transcendent constants of Euler's identity in fact 'govern' material reality, in such a foundational way, should be enough to send shivers down any mathematician's spine.
Further discussion relating Euler's identity to General Relativity and Quantum Mechanics:
further notes:
in the equation e^(pi*i) + 1 = 0
,,,we find that pi is required in;
General Relativity (Einstein’s Equation)
,,,and we also find that the square root of negative 1 is required in;
Quantum Mechanics (Schrödinger’s Equations)
,,and we also find that e is required for;
wave equations, finding the distribution of prime numbers, and electrical theory, and e is also found to be foundational to trigonometry.,,,
The various uses and equations of 'e' are listed at the bottom of the following page:
Stanford University mathematics professor - Dr. Keith Devlin
Here is a very well done video, showing the stringent 'mathematical proofs' of Euler's Identity:
Euler's identity - video
The mystery doesn't stop there; this following video shows how pi and e are found in Genesis 1:1 and John 1:1:
Euler's Identity - God Created Mathematics - video
This following website, and video, has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1:
Fascinating Bible code – Pi and natural log – Amazing – video (of note: correct exponent for base of Nat Log found in John 1:1 is 10^40, not 10^65 as stated in the video)
Another transcendent mathematical structure that is found embedded throughout our reality is Fibonacci's Number;
The golden ratio (tau) is seen in some surprising areas of mathematics. The ratio of consecutive Fibonacci numbers (1, 1, 2, 3, 5, 8, 13 . . ., each number being the sum of the previous two numbers) approaches the golden ratio, as the sequence gets infinitely long. The sequence is sometimes defined as starting at 0, 1, 1, 2, 3
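That convergence is easy to see numerically; a minimal Python sketch:

# Ratio of consecutive Fibonacci numbers converges to the golden ratio
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b

print(b / a)               # ~1.6180339887, already correct to 9+ decimal places
print((1 + 5**0.5) / 2)    # 1.618033988749895 (the golden ratio)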
Fibonacci Numbers – The Fingerprint of God - video - (See video description for a look at Euler’s Identity)
Golden Ratio in Human Body - video
Of note: the natural log base e (which is found in Euler's identity and John 1:1) is also found to be necessary for calculating the 'growth' of the 'golden spiral' of the Fibonacci sequence;
The Logarithmic Spiral
Excerpt: r increases proportionally, and remains in proportion with the golden ratio as theta increases, if we define the spiral as r = r0*e^(a*phi), where phi is the polar angle. The reasons for this are more thoroughly discussed by Mukhopadhyay.
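To illustrate how the natural log base e enters the golden spiral, here is a minimal Python sketch; it assumes the standard polar form r = a*e^(b*theta), with the growth constant b chosen so the radius grows by exactly the golden ratio every quarter turn (note that here phi denotes the golden ratio and theta the polar angle, whereas the excerpt above uses phi for the angle):

import math

phi = (1 + 5**0.5) / 2             # the golden ratio
b = math.log(phi) / (math.pi / 2)  # growth constant: factor of phi per quarter turn

def r(theta, a=1.0):
    # radius of the golden (logarithmic) spiral at polar angle theta
    return a * math.exp(b * theta)

print(r(math.pi / 2) / r(0))  # 1.618033988749895: one quarter turn multiplies r by phi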
What Tau (The Golden Ratio; Fibonacci Number) Sounds Like - golden ratio set to music
The following, somewhat related, article is very interesting;
The Man Who Draws Pi - A Case of Acquired Savant Syndrome and Synesthesia Following a Brutal Assault:
Excerpt: “Everything that exists has geometry”, says JP, who acquired amazing mathematical abilities after a mugging incident in 2002. He was hit hard on the head, and he now experiences reality as mathematical fractals describable by equations. Light bouncing off a shiny car explodes into a fractal overlaying reality, the outer boundaries of objects are tangents, tiny pieces that change angles relative to one another and turn into picture frames of fractals during motion, and the boundaries of clouds and liquids are spiraling lines.,,, Mathematicians and physicists were taken aback: Some of JP’s drawings depict equations in math that hitherto were only presentable in graph form. Others depict actual electron interference patterns.,,, Despite his lack of prior training, JP is the only person in the world to have ever hand-drawn meticulously accurate approximations of mathematical fractals using only straight lines. He can predict the vectors for prime numbers in his drawings, and his drawing of hf = mc^2, which contains all the style elements of his earliest drawings, is remarkably similar to an actual picture of electron interference patterns, which he found years after first drawing the pattern (see Fig 7, 8).
Romans 11:33
The following is just a cool video that makes you wonder:
What pi sounds like when put to music – cool video
This following video is very interesting for revealing how difficult it was for mathematicians to actually 'prove' that mathematics was even true in the first place:
Georg Cantor - The Mathematics Of Infinity - video
entire video: BBC-Dangerous Knowledge (Part 1-10)
Godel's story on the incompleteness theorem can be picked up here in part 7 of the preceding video:
BBC-Dangerous Knowledge (Part 7-10)
As you can see from the preceding video, mathematics cannot be held to be 'true' unless an assumption of a highest transcendent infinity is held to be true, a highest infinity which Cantor, and even Godel, held to be God. Thus this following formal proof, which was referred to at the end of the preceding video, shows that math cannot be held to be consistently true unless the highest infinity of God is held to be consistently true as a starting assumption:
Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century
Excerpt: Gödel’s Incompleteness Theorem says:
“Anything you can draw a circle around cannot explain itself without referring to something outside the circle - something you have to assume to be true but cannot prove "mathematically" to be true.”
A Biblical View of Mathematics - Vern Poythress - doctorate in theology, PhD in Mathematics (Harvard)
Excerpt: only on a thoroughgoing Biblical basis can one genuinely understand and affirm the real agreement about mathematical truths.
Taking God Out of the Equation - Biblical Worldview - by Ron Tagliapietra - January 1, 2012
Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties.
1. Validity . . . all conclusions are reached by valid reasoning.
2. Consistency . . . no conclusions contradict any other conclusions.
3. Completeness . . . all statements made in the system are either true or false.
The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem.
Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation.
Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3).
The God of the Mathematicians - Goldman
Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” - Kurt Gödel - (Gödel is considered by many to be the greatest mathematician of the 20th century)
This following site is an easy to use, and understand, interactive website that takes the user through what is termed 'Presuppositional Apologetics'. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place.
Presuppositional Apologetics - easy to use interactive website
Random Chaos vs. Uniformity Of Nature - Presuppositional Apologetics - video
Presuppositional Apologetics (1 of 5) - video - Atheist vs. Christian debate on the street (Law of Non-Contradiction featured prominently in the debate)
Why should the human mind be able to comprehend reality so deeply? - referenced article
Materialism simply dissolves into absurdity when pushed to extremes and certainly offers no guarantee to us for believing our perceptions and reasoning within science are trustworthy in the first place:
BRUCE GORDON: Hawking's irrational arguments - October 2010
Excerpt: What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science.
Dr. Bruce Gordon - The Absurdity Of The Multiverse & Materialism in General - video
This 'lack of a guarantee' that our perceptions and reasoning in science are trustworthy in the first place even extends into evolutionary naturalism itself;
Should You Trust the Monkey Mind? - Joe Carter
What is the Evolutionary Argument Against Naturalism? ('inconsistent identity' of cause leads to failure of absolute truth claims for materialists) (Alvin Plantinga) - video
Alvin Plantinga - Science and Faith Conference - video
Philosopher Sticks Up for God
Excerpt: Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, “is vastly more hospitable to science than naturalism,” with its random process of natural selection, he (Plantinga) writes. “Indeed, it is theism, not naturalism, that deserves to be called ‘the scientific worldview.’”
~ Alvin Plantinga
Can atheists trust their own minds? - William Lane Craig On Alvin Plantinga's Evolutionary Argument Against Naturalism - video
The following interview is sadly comical, as an evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth he is purporting to give in the interview (which begs the question of how he was able to come to that particular truthful realization in the first place if neo-Darwinian evolution were actually true);
Evolutionary guru: Don't believe everything you think - October 2011
Interviewer: You could be deceiving yourself about that.(?)
Evolutionary Psychologist: Absolutely.
Related article;
Evolutionary Guru Deceives Himself - October 12, 2011
Of related note:
"nobody to date has yet found a demarcation criterion according to which Darwin(ism) can be described as scientific" - Imre Lakatos (November 9, 1922 – February 2, 1974) a philosopher of mathematics and science, quote was as stated in 1973 LSE Scientific Method Lecture
Science and Pseudoscience - Imre Lakatos - exposing Darwinism as a ‘degenerate science program’, as a pseudoscience, using Lakatos's rigid criteria
CS Lewis – Mere Christianity
"But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?" - Charles Darwin - Letter To William Graham - July 3, 1881
“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms.” - J. B. S. Haldane, "When I am dead," in Possible Worlds: And Other Essays (1927), Chatto and Windus: London, 1932 reprint, p. 209.
The ultimate irony is that this philosophy implies that Darwinism itself is just another meme, competing in the infectivity sweepstakes by attaching itself to that seductive word “science.” Dawkins ceaselessly urges us to be rational, but he does so in the name of a philosophy that implies that no such thing as rationality exists because our thoughts are at the mercy of our genes and memes. The proper conclusion is that Dawkins' poor brain has been infected by the Darwin meme, a virus of the mind if ever there was one, and we wonder if he will ever be able to find the cure.
~ Phillip Johnson
Is Randomness really the rational alternative to the ‘First Mover’ of Theists?
John Lennox - Science Is Impossible Without God - Quotes - video remix
Absolute Truth - Frank Turek - video
This following video humorously reveals the bankruptcy that atheists have in trying to ground beliefs within a materialistic, genetic reductionism, worldview;
John Cleese – The Scientists – humorous video
The following study is not surprising after realizing atheists have no solid basis within their worldview for grounding their claims about absolute truth;
Look Who's Irrational Now
It is also interesting to point out that this 'inconsistent identity' pointed out by Alvin Plantinga, which leads to the failure of neo-Darwinists to be able to make absolute truth claims for their beliefs, also leads to their failure to account for objective morality, in that neo-Darwinists cannot maintain a consistent identity towards a stable, unchanging cause for objective morality within their lives;
The Knock-Down Argument Against Atheist Sam Harris' moral argument – William Lane Craig – video
Stephen Meyer - Morality Presupposes Theism (1 of 4) - video
Hitler & Darwin, pt. 2: Richard Weikart on Evolutionary Ethics - podcast
Top Ten Reasons We Know the New Testament is True – Frank Turek – video – November 2011
(41:00 minute mark - Despite what is commonly believed, that being 'good enough' will get one to heaven, in reality both Mother Teresa and Hitler fall short of the moral perfection required to meet the perfection of God's objective moral code)
Objective Morality – The Objections – Frank Turek – video
This following short video clearly shows, in a rather graphic fashion, the ‘moral dilemma' that atheists face when trying to ground objective morality;
Cruel Logic – video
Description: A brilliant serial killer videotapes his debates with his college faculty victims. The topic of his debate with each victim: his moral right to kill them.
"Atheists may do science, but they cannot justify what they do. When they assume the world is rational, approachable, and understandable, they plagiarize Judeo-Christian presuppositions about the nature of reality and the moral need to seek the truth. As an exercise, try generating a philosophy of science from hydrogen coming out of the big bang. It cannot be done. It’s impossible even in principle, because philosophy and science presuppose concepts that are not composed of particles and forces. They refer to ideas that must be true, universal, necessary and certain." Creation-Evolution Headlines
Atheism cannot ground Morality or Science
As well, as should be blatantly obvious, mathematics cannot be grounded in a materialistic worldview;
Mathematics is the language with which God has written the universe.
Galileo Galilei
The Unreasonable Effectiveness of Mathematics in the Natural Sciences - Eugene Wigner
The Underlying Mathematical Foundation Of The Universe -Walter Bradley - video
How the Recent Discoveries Support a Designed Universe - Dr. Walter L. Bradley - paper
The Five Foundational Equations of the Universe and Brief Descriptions of Each:
How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things?
— Albert Einstein
This following site has a brief discussion on the fact that 'transcendent math' is not an invention of man but that transcendent math actually dictates how 'reality' is constructed and operates:
"The reason that mathematics is so effective in capturing, expressing, and modeling what we call empirical reality is that there is a ontological correspondence between the two - I would go so far as to say that they are the same thing."
Richard Sternberg - Pg. 8 How My Views On Evolution Evolved
The following site lists the unchanging constants of the universe:
Systematic Search for Expressions of Dimensionless Constants using the NIST database of Physical Constants
Excerpt: The National Institute of Standards and Technology lists 325 constants on their website as ‘Fundamental Physical Constants’. Among the 325 physical constants listed, 79 are unitless in nature (usually by defining a ratio). This produces a list of 246 physical constants with some unit dependence. These 246 physical constants can be further grouped into a smaller set when expressed in standard SI base units.,,,
The numerical values of the transcendent universal constants in physics, which are found for gravity which holds planets, stars and galaxies together; for the weak nuclear force which holds neutrons together; for electromagnetism which allows chemical bonds to form; for the strong nuclear force which holds protons together; for the cosmological constant of space/energy density which accounts for the universe's expansion; and for many other constants which are universal in their scope, 'just so happen' to be the exact numerical values they need to be in order for life, as we know it, to be possible in this universe. A more than slight variance in the value of any individual universal constant, over the entire age of the universe, would have undermined the ability of the entire universe to support life as we know it. To put it mildly, this is an irreducibly complex condition.
Finely Tuned Gravity (1 in 10^40 tolerance; which is just one inch of tolerance allowed on an imaginary ruler stretching across the diameter of the entire universe) - video
Anthropic Principle - God Created The Universe - Michael Strauss PhD. - video
Can Life Be Merely an Accident? (Dr. Robert Piccioni - Fine Tuning) - video
“If we modify the value of one of the fundamental constants, something invariably goes wrong, leading to a universe that is inhospitable to life as we know it. When we adjust a second constant in an attempt to fix the problem(s), the result, generally, is to create three new problems for every one that we “solve.” The conditions in our universe really do seem to be uniquely suitable for life forms like ourselves, and perhaps even for any form of organic complexity." Gribbin and Rees, “Cosmic Coincidences”, p. 269
Finely Tuned Universe - video
The Case For The Creator - Lee Strobel - video
This following site has a rigorously argued defense of the fine-tuning(teleological) argument:
The Teleological Argument: An Exploration of the Fine-Tuning of the Universe - ROBIN COLLINS
Here are a few sites that list the finely tuned universal constants:
Fine-Tuning For Life In The Universe
Evidence for the Fine Tuning of the Universe
Here is a defense against Victor Stenger's “no fine-tuning” claims:
Many of Victor Stenger’s “no fine-tuning” claims dubbed “highly problematic” (in new peer reviewed paper) - January 2012
Here is a layman friendly review of the preceding paper:
Is fine-tuning a fallacy? - January 2012
Psalm 119:89-90
On and on, through each universal constant scientists analyze, they find the same unchanging precision from the universe's creation.
As a side note to this, it seems even the 'exotic' virtual photons, which fleetingly pop into and out of existence, are tied directly to the anthropic principle through the 1 in 10^120 cosmological constant for dark energy:
Shining new light on dark energy with galaxy clusters - December 2010
Excerpt: "Each model for dark energy makes a prediction that you should see this many clusters, with this particular mass, this particular distance away from us," Sehgal said. Sehgal tested these predictions by using data from the most massive galaxy clusters. The results support the standard, vacuum-energy model for dark energy.
Further note:
Virtual Particles, Anthropic Principle & Special Relativity - Michael Strauss PhD. Particle Physics - video
Here is an interesting experiment accomplished with 'virtual' particles:
Researchers create light from 'almost nothing' - June 2011
Of interest to the unchanging nature of the transcendent universal 'information' constants which govern this universe: the four primary forces/constants of the universe (gravity, electromagnetism, and the strong and weak nuclear forces) are said to be 'mediated at the speed of light' by mass-less 'mediator bosons', yet the speed of light constant is itself shown to be transcendent of any underlying material basis in the first place.
GRBs Expand Astronomers' Toolbox - Nov. 2009
Excerpt: a detailed analysis of the GRB (Gamma Ray Burst) in question demonstrated that photons of all energies arrived at essentially the same time. Consequently, these results falsify any quantum gravity models requiring the simplest form of a frothy space.
I would also like to point out that, since time, as we understand it, comes to a complete stop at the speed of light, this gives these four fundamental universal constants the characteristic of being timeless, and thus unchanging, as far as the temporal mass of this universe is concerned. In other words, we should not a priori expect that which is timeless in nature to ever change in value. Yet even though such stability would seem to be the obvious a priori expectation, when scientists actually measure for variance in the fundamental constants they always end up being 'surprised' by the stability they find, since on the materialistic view that stability is not to be a priori expected:
Latest Test of Physical Constants Affirms Biblical Claim - Hugh Ross - September 2010
Excerpt: The team’s measurements on two quasars (Q0458-020 and Q2337-011, at redshifts = 1.561 and 1.361, respectively) indicated that all three fundamental physical constants have varied by no more than two parts per quadrillion per year over the last ten billion years—a measurement fifteen times more precise, and thus more restrictive, than any previous determination. The team’s findings add to the list of fundamental forces in physics demonstrated to be exceptionally constant over the universe’s history. This confirmation testifies of the Bible’s capacity to predict accurately a future scientific discovery far in advance. Among the holy books that undergird the religions of the world, the Bible stands alone in proclaiming that the laws governing the universe are fixed, or constant.
This following site discusses the many technical problems critics found with the paper that recently (2010) tried to postulate variance within the fine-structure constant:
Psalm 119:89-91
According to the materialistic philosophy, there is no apparent reason why the values of the transcendent universal constants could not have varied dramatically from what they actually are. In fact, the presumption of materialism expects a fairly large amount of flexibility, indeed chaos, in the underlying constants of the universe, since the constants themselves are postulated to randomly 'emerge' from some, as far as I can tell, completely undefined material basis at the Big Bang.
All individual constants are of such a high degree of precision as to defy human comprehension, much less comparison to the most precise man-made machine (1 in 10^22 - gravity wave detector). For example, the cosmological constant (dark energy) is balanced to 1 part in 10^120 and the mass density constant is balanced to 1 part in 10^60.
To clearly illustrate the stunning, incomprehensible degree of fine-tuning we are dealing with in the universe, Dr. Ross has used the illustration that adding or subtracting a single dime's worth of mass to the observable universe, during the Big Bang, would have been enough of a change in the mass density of the universe to make life impossible in this universe. This word picture he uses, with the dime, helps to demonstrate the number used to quantify that fine-tuning of mass for the universe, namely 1 part in 10^60 for mass density. Compared to the total mass of the observable universe, 1 part in 10^60 works out to about a tenth part of a dime, if not smaller.
Where Is the Cosmic Density Fine-Tuning? - Hugh Ross
Actually, 1 in 10^60 for the fine-tuning of the mass density of the universe may be equal to just 1 grain of sand's worth of mass instead of a tenth of a dime!
As well it turns out even the immense size of the universe is necessary for life:
Evidence for Belief in God - Rich Deem
Excerpt: Isn't the immense size of the universe evidence that humans are really insignificant, contradicting the idea that a God concerned with humanity created the universe? It turns out that the universe could not have been much smaller than it is in order for nuclear fusion to have occurred during the first 3 minutes after the Big Bang. Without this brief period of nucleosynthesis, the early universe would have consisted entirely of hydrogen. Likewise, the universe could not have been much larger than it is, or life would not have been possible. If the universe were just one part in 10^59 larger, the universe would have collapsed before life was possible. Since there are only 10^80 baryons in the universe, this means that an addition of just 10^21 baryons (about the mass of a grain of sand) would have made life impossible. The universe is exactly the size it must be for life to exist at all.
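As a quick sanity check of the arithmetic in the preceding excerpt (my own rough illustration, using only the numbers quoted above): if the universe contains about 10^80 baryons, then a 1-part-in-10^59 variation corresponds to

10^80 / 10^59 = 10^21 baryons,

which, as the excerpt notes, is roughly the number of baryons in a single grain of sand.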
Here is a video of Astrophysicist Hugh Ross explaining the anthropic cosmological principle behind the immense size of the universe as well as behind the ancient age of the universe:
We Live At The Right Time In Cosmic History - Hugh Ross - video
Here is a lesser quality video on the same subject:
We Exist At The Right Time In Cosmic History – Hugh Ross – video
I think this following music video and Bible verse sum up nicely what these transcendent universal constants are telling us about reality:
My Beloved One - Inspirational Christian Song - video
Hebrews 11:3
Although 1 part in 10^120 and 1 part in 10^60 far exceed, by many orders of magnitude, the highest tolerance ever achieved in any man-made machine (1 part in 10^22, for a gravity wave detector), according to esteemed British mathematical physicist Roger Penrose (1931-present), one particular constant, the 'original phase-space volume' of the universe, required such precision that the "Creator’s aim must have been to an accuracy of 1 part in 10^10^123”. This number is gargantuan. If this number were written out in its entirety, 1 with 10^123 zeros to the right, it could not be written on a piece of paper the size of the entire visible universe, even if a digit were written down on each sub-atomic particle in the entire universe, since the universe only has about 10^80 sub-atomic particles in it.
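To see why the number cannot be written down (my own back-of-envelope check, assuming the commonly cited figure of roughly 10^80 sub-atomic particles in the observable universe): writing the number out requires about 10^123 digits, while one digit per particle supplies at most 10^80 digits, leaving a shortfall factor of

10^123 / 10^80 = 10^43,

i.e. every particle in the universe would have to carry 10^43 digits apiece.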
Roger Penrose discusses initial entropy of the universe. - video
Excerpt: "The time-asymmetry is fundamentally connected to with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the "source" of the Second Law (Entropy)."
How special was the big bang? - Roger Penrose
Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123.
(from the Emperor’s New Mind, Penrose, pp 339-345 - 1989)
As well, contrary to speculation of 'budding universes' arising from Black Holes, Black Hole singularities are completely opposite the singularity of the Big Bang in terms of the ordered physics of entropic thermodynamics. In other words, Black Holes are singularities of destruction, and disorder, rather than singularities of creation and order.
Roger Penrose - How Special Was The Big Bang?
Entropy of the Universe - Hugh Ross - May 2010
This 1 in 10^10^123 number, for the time-asymmetry of the initial state of the 'ordered entropy' of the universe, also lends strong support to 'highly specified infinite information' creating the universe, since:
"Gain in entropy always means loss of information, and nothing more."
Gilbert Newton Lewis - Eminent Chemist
Tom Siegfried, Dallas Morning News, 5/14/90 - Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article
This staggering level of precision, for each individual universal constant scientists can measure, is exactly why many theoretical physicists have suggested the existence of a 'super-calculating intellect' to account for this fine-tuning. This is precisely why the anthropic hypothesis has gained such a strong foothold in many scientific circles. American physicist Robert Griffiths jokingly remarked about these recent developments:
"If we need an atheist for a debate, I go to the philosophy department. The physics department isn't much use anymore."
Further comments by leading scientists in astrophysics:
Nobel Prize winning Physicist Charles Townes
Physicist and Nobel laureate Arno Penzias
Michael Turner - (Astrophysicist at Fermilab)
John O'Keefe (astronomer at NASA)
Alan Sandage (preeminent Astronomer)
(NASA Astronomer Robert Jastrow, God and the Astronomers, p. 116.)
"Is he worthy to be called a man who attributes to chance, not to an intelligent cause, the constant motion of the heavens, the regular courses of the stars, the agreeable proportion and connection of all things, conducted with so much reason that our intellect itself is unable to estimate it rightly? When we see machines move artificially, as a sphere, a clock, or the like, do we doubt whether they are the productions of reason? -
Cicero (45 BC)
Proverbs 8:29-30
Here is a somewhat related footnote refuting the materialistically inspired conjecture of rapid inflation at the initial phase of the Big Bang (it turns out rapid inflation was initially postulated to 'smooth away' the 'problems' of fine-tuning):
One of cosmic (Rapid) inflation theory’s creators now questions own theory - April 2011
Excerpt: (Rapid) Inflation adds a whole bunch of really unlikely metaphysical assumptions — a new force field that has a never-before-observed particle called the “inflaton”, an expansion faster than the speed of light, an interaction with gravity waves which are themselves only inferred– just so that it can explain the unlikely contingency of a finely-tuned big bang.
But instead of these extra assumptions becoming more-and-more supported, the trend went the opposite direction, with more-and-more fine-tuning of the inflation assumptions until they look as fine-tuned as Big Bang theories. At some point, we have “begged the question”. Frankly, the moment we add an additional free variable, I think we have already begged the question. In a Bayesean comparison of theories, extra variables reduce the information content of the theory, (by the so-called Ockham factor), so these inflation theories are less, not more, explanatory than the theory they are supposed to replace. ... After 20 years of work, if we haven’t made progress, but have instead retreated, it is time to cut bait.
The only other theory possible for the universe’s creation, other than a God-centered hypothesis, is some purposeless materialistic theory based on blind chance. Materialistic blind chance tries to escape being completely crushed by the overwhelming weight of evidence for design in the fine-tuning of the universe by appealing to an infinity of other un-testable universes in which all other possibilities have been played out. Yet there is absolutely no hard physical evidence to support this blind-chance conjecture. In fact, the 'infinite multiverse' conjecture suffers from some very serious, and deep, flaws of logic.
The Absurdity of Inflation, String Theory & The Multiverse - Dr. Bruce Gordon - video
The End Of Materialism? - Dr. Bruce Gordon
* In the multiverse, anything can happen for no reason at all.
* In other words, the materialist is forced to believe in random miracles as an explanatory principle.
* In a Theistic universe, nothing happens without a reason. Miracles are therefore intelligently directed deviations from divinely maintained regularities, and are thus expressions of rational purpose.
* Scientific materialism is (therefore) epistemically self defeating: it makes scientific rationality impossible.
As well, this hypothetical infinite multiverse obviously begs the question of exactly which laws of physics, arising from which material basis, are telling all the other natural laws in physics what, how and when to do the many precise unchanging things they do in these other universes. Exactly where is this universe-creating machine to be located? Moreover, if an infinite number of other possible universes must exist in order to explain the fine-tuning of this one, then why is it not also infinitely possible for an infinitely powerful and transcendent Creator to exist? Using the materialists' same line of reasoning, which appeals to an infinity of multiverses to 'explain away' the extreme fine-tuning of this one universe, we can surmise the following: if it is infinitely possible for God to exist, then He, with 100% certainty, must exist, no matter how small the probability of His existence is in any one of this infinity of universes; and since He certainly must exist in some possible world, He must exist in all possible worlds, since all possibilities in all universes automatically become subject to Him, He being, by definition, transcendent and infinitely powerful.

To clearly illustrate the level of absurdity of what materialists now consider their cutting-edge science: the materialistic conjecture of an infinity of universes to 'explain away' the fine-tuning of this one also ensures the 100% probability of the existence of Pink Unicorns, no matter how small the probability is of them existing (though since a pink unicorn is a 'contingent being', instead of a 'necessary being' like God, pink unicorns would only exist in 'some' possible worlds in the multiverse scenario). Thus it is self-evident that the atheistic materialists have painted themselves into an inescapable corner of logical absurdities in trying to find an escape from the Theistic implications we are finding for the fine-tuning of this universe.
The preceding argument has actually been made into a formal philosophical proof:
The Ontological Argument (The Introduction) - video
Ontological Argument For God From The Many Worlds Hypothesis - William Lane Craig - video
God Is Not Dead Yet – William Lane Craig – Page 4
The ontological argument. Anselm’s famous argument has been reformulated and defended by Alvin Plantinga, Robert Maydole, Brian Leftow, and others. God, Anselm observes, is by definition the greatest being conceivable. If you could conceive of anything greater than God, then that would be God. Thus, God is the greatest conceivable being, a maximally great being. So what would such a being be like? He would be all-powerful, all-knowing, and all-good, and he would exist in every logically possible world. But then we can argue:
1. It is possible that a maximally great being (God) exists.
2. If it is possible that a maximally great being exists, then a maximally great being exists in some possible world.
3. If a maximally great being exists in some possible world, then it exists in every possible world.
4. If a maximally great being exists in every possible world, then it exists in the actual world.
5. Therefore, a maximally great being exists in the actual world.
6. Therefore, a maximally great being exists.
7. Therefore, God exists.
Now it might be a surprise to learn that steps 2–7 of this argument are relatively uncontroversial. Most philosophers would agree that if God’s existence is even possible, then he must exist. So the whole question is: Is God’s existence possible? The atheist has to maintain that it’s impossible that God exists. He has to say that the concept of God is incoherent, like the concept of a married bachelor or a round square. But the problem is that the concept of God just doesn’t appear to be incoherent in that way. The idea of a being which is all-powerful, all knowing, and all-good in every possible world seems perfectly coherent. And so long as God’s existence is even possible, it follows that God must exist.
I like the concluding comment about the ontological argument from the following Dr. Plantinga video:
"God then is the Being that couldn't possibly not exit."
Ontological Argument – Dr. Plantinga (3:50 minute mark)
As weird as it may sound, this following video refines the Ontological argument into a proof that, because of the characteristic of ‘maximally great love’, God must exist in more than one person:
The Ontological Argument for the Triune God - video
Here are some more resources outlining the absurdity of the multiverse conjecture:
The Multiverse Gods, final part - Robert Sheldon - June 2011
Excerpt: And so in our long journey through the purgatory of multiverse-theory, we discover, as we previously discovered for materialism, there are two solutions, and only two. Either William Lane Craig is correct and multiverse-theory is just another ontological proof of a personal Creator, or we follow Nietzsche into the dark nihilism of the loss of reason. Heaven or hell, there are no other solutions.,_final_part.thtml
Atheism In Crisis - The Absurdity Of The Multiverse - video
Multiverse and the Design Argument - William Lane Craig
Excerpt: Roger Penrose of Oxford University has calculated that the odds of our universe’s low entropy condition obtaining by chance alone are on the order of 1 in 10^10(123), an inconceivable number. If our universe were but one member of a multiverse of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller universe. For example, the odds of our solar system’s being formed instantly by the random collision of particles is about 1 in 10^10(60), a vast number, but inconceivably smaller than 1 in 10^10(123). (Penrose calls it “utter chicken feed” by comparison [The Road to Reality (Knopf, 2005), pp. 762-5]). Or again, if our universe is but one member of a multiverse, then we ought to be observing highly extraordinary events, like horses’ popping into and out of existence by random collisions, or perpetual motion machines, since these are vastly more probable than all of nature’s constants and quantities’ falling by chance into the virtually infinitesimal life-permitting range. Observable universes like those strange worlds are simply much more plenteous in the ensemble of universes than worlds like ours and, therefore, ought to be observed by us if the universe were but a random member of a multiverse of worlds. Since we do not have such observations, that fact strongly disconfirms the multiverse hypothesis. On naturalism, at least, it is therefore highly probable that there is no multiverse. — Penrose puts it bluntly: “these world ensemble hypotheses are worse than useless in explaining the anthropic fine-tuning of the universe”.
Michael Behe has a profound answer to the infinite multiverse argument in “Edge of Evolution”. If there are infinite universes, then we couldn’t trust our senses, because it would be just as likely that our universe might only consist of a human brain that pops into existence which has the neurons configured just right to only give the appearance of past memories. It would also be just as likely that we are floating brains in a lab, with some scientist feeding us fake experiences. Those scenarios would be just as likely as the one we appear to be in now (one universe with all of our experiences being “real”). Bottom line is, if there really are an infinite number of universes out there, then we can’t trust anything we perceive to be true, which means there is no point in seeking any truth whatsoever.
Here is a more formal refutation of the multiverse conjecture;
Bayesian considerations on the multiverse explanation of cosmic fine-tuning - V. Palonen
Conclusions: ... The self-sampling assumption approach by Bostrom was shown to be inconsistent with probability theory. Several reasons were then given for favoring the ‘this universe’ (TU) approach and main criticisms against TU were answered. A formal argument for TU was given based on our present knowledge. The main result is that even under a multiverse we should use the proposition “this universe is fine-tuned” as data, even if we do not know the ‘true index’ of our universe. It follows that because multiverse hypotheses do not predict fine-tuning for this particular universe any better than a single universe hypothesis, multiverse hypotheses are not adequate explanations for fine-tuning. Conversely, our data on cosmic fine-tuning does not lend support to the multiverse hypotheses. For physics in general, irrespective of whether there really is a multiverse or not, the common-sense result of the above discussion is that we should prefer those theories which best predict (for this or any universe) the phenomena we observe in our universe.
Another escape that materialists have postulated is a slightly constrained 'string-theoretic' multiverse. The following expert shows why the materialistic postulation of 'string theory' is, for all intents and purposes of empirical science, a complete waste of time and energy:
This Week’s Hype - August 2011
Excerpt: ‘It’s well-known that one can find Stephen Hawking’s initials, and just about any other pattern one can think of somewhere in the CMB data. ... So, the bottom line is that they see nothing, but a press release has been issued about how wonderful it is that they have looked for evidence of a Multiverse, without mentioning that they found nothing.’ – Peter Woit PhD.
Here is another entry from Professor Peter Woit's blog, where he has been fairly busy showing the failure of string theory to pass any of the experimental tests that have been proposed for its predictions:
String Theory Fails Another Test, the “Supertest” - December 2010
Excerpt: It looks like string theory has failed the “supertest”. If you believe that string theory “predicts” low-energy supersymmetry, this is a serious failure.
This Week’s Hype – November 3, 2011 by Peter Woit (Ph.D. in theoretical physics and a lecturer in mathematics at Columbia)
Excerpt: the LHC has turned out to be a dud, producing no black holes or extra dimensions,
SUSY Still in Hiding - Prof. Peter Woit - Columbia University - February 2012
Excerpt: The LHC (Large Hadron Collider) has done an impressive job of investigating and leaving in tatters the SUSY/extra-dimensional speculative universe that has dominated particle theory for much of the last thirty years, and this is likely to be one of its main legacies. These fields will undoubtedly continue to play a large role in particle theory, no matter how bad the experimental situation gets, as their advocates argue “Never, never, never give up!”, but fewer and fewer people will take them seriously.
The Ultimate Guide to the Multiverse - Peter Woit - November 2011
Excerpt: The multiverse propaganda machine has now been going full-blast for more than eight years, since at least 2003 or so, and I’m beginning to wonder “what’s next?”. Once your ideas about theoretical physics reach the point of having a theory that says nothing at all, there’s no way to take this any farther. You can debate the “measure problem” endlessly in academic journals, but the cover stories about how you have revolutionized physics can only go on so long before they reach their natural end of shelf-life. This has gone on longer than I’d ever have guessed, but surely it has to end sooner or later, - Peter Woit - Senior Lecturer at Columbia University
Integral challenges physics beyond Einstein - June 30, 2011
Excerpt: However, Integral’s observations are about 10,000 times more accurate than any previous and show that any quantum graininess must be at a level of 10^-48 m or smaller. ... “This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories,” says Dr Laurent.
“string theory, while dazzling, has outrun any conceivable experiment that could verify it”
Excerpt: string theory, while dazzling, has outrun any conceivable experiment that could verify it—there’s zero proof that it describes how nature works.
Though to be fair, a subset of the math of the string hypothesis did get lucky with an interesting 'after the fact' prediction (post-diction) of an already known phenomenon. (But this is very similar to finding an arrow on a wall, drawing a circle around it, and then declaring that you hit a bulls-eye!):
A first: String theory predicts an experimental result:
Excerpt: Not to say that string theory has been proved. Clifford Johnson of the University of Southern California, the string theorist on the panel, was very clear about that.
Despite this contrived 'after the fact' postdiction of a physical phenomenon, string theory is constantly suffering severe setbacks in other areas; thus string theory has yet to even establish itself as a legitimate line of inquiry within science.
Testing Creation Using the Proton to Electron Mass Ratio
Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe. ... For the first time, limits on the possible variability of the electron to proton mass ratio are low enough to constrain dark energy models that “invoke rolling scalar fields,” that is, some kind of cosmic quintessence. They also are low enough to eliminate a set of string theory models in physics. That is these limits are already helping astronomers to develop a more detailed picture of both the cosmic creation event and of the history of the universe. Such achievements have yielded, and will continue to yield, more evidence for the biblical model for the universe’s origin and development.
As well, even if the whole of string theory were to have been found to be true, it would have done nothing to help the materialist; in reality, it would have only added another level of 'finely tuned complexity' for us to deal with, without ever truly explaining the origination of the logically coherent complexity (Logos) of string theory in the first place. Bruce Gordon, after a thorough analysis of the entire string theory framework, states the following conclusion on page 72 of Robert J. Spitzer's book 'New Proofs For The Existence Of God':
Sean Carroll channels Giordano Bruno - Robert Sheldon - November 2011
Excerpt: 'In fact, on Lakatos' analysis, both String Theory and Inflation are clearly "degenerate science programs".'
This following article illustrates just how far string theory would miss the mark of explaining the fine-tuning we see even if it were found to be true:
Baron Münchhausen and the Self-Creating Universe:
Roger Penrose has calculated that the entropy of the big bang itself, in order to give rise to the life-permitting universe we observe, must be fine-tuned to one part in e^(10^123) ≈ 10^(10^123). Such complex specified conditions do not arise by chance, even in a string-theoretic multiverse with 10^500 different configurations of laws and constants, so an intelligent cause may be inferred. What is more, since it is the big bang itself that is fine-tuned to this degree, the intelligence that explains it as an effect must be logically prior to it and independent of it – in short, an immaterial intelligence that transcends matter, energy and space-time. (of note: 10^10^123 minus 10^500 is still, for all practical purposes, 10^10^123)
Infinitely wrong - Sheldon - November 2010
Excerpt: So you see, they gleefully cry, even [1 / 10^(10^123)] x ∞ = 1! Even the most improbable events can be certain if you have an infinite number of tries. ... Ahh, but does it? I mean, zero divided by zero is not one, nor is 1/∞ x ∞ = 1. Why? Well for starters, it assumes that the two infinities have the same cardinality.
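To illustrate the point of the preceding excerpt with a simple worked example (my own illustration, not from Sheldon's article): the value of a 'zero times infinity' product depends entirely on how fast each factor approaches its limit, so no single answer, such as 1, can simply be assumed. As n grows without bound,

(1/n) x n = 1 for every n, but
(1/n^2) x n = 1/n, which goes to 0, and
(1/n) x n^2 = n, which goes to infinity.

All three products have the form 'something going to zero times something going to infinity', yet they give three different answers.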
On Signature in the Cell, Robert Saunders Still Doesn't Get It - Jonathan M. - December 2011
Excerpt: On the issue of fine tuning, Saunders appeals to the famous anthropic argument, noting, 'The fine-tuning argument has always seemed to me to be somewhat tautologous. Had the constants been different, we would not be here to look at the Universe and its physical constants. We have a sample size of 1. Exactly 1.'
William Lane Craig has effectively countered this argument:
'[S]uppose you are dragged before a firing squad of 100 trained marksmen, all of them with rifles aimed at your heart, to be executed. The command is given; you hear the deafening sound of the guns. And you observe that you are still alive, that all of the 100 marksmen missed! Now while it is true that, "You should not be surprised that you do not observe that you are dead," nonetheless it is equally true that, "You should be surprised that you do observe that you are alive."
Since the firing squad's missing you altogether is extremely improbable, the surprise expressed is wholly appropriate, though you are not surprised that you do not observe that you are dead, since if you were dead you could not observe it.
Stephen Hawking created quite a stir with his book 'The Grand Design', in Sept. 2010, by claiming that M-theory, the dubious, and completely unsubstantiated, stepchild of string theory, eliminated the need for God to explain the origin of the universe. Many physicists objected to Hawking's claim, but perhaps the best argument against Hawking's claim is Hawking's very own words:
Hawking gave the game away for his 'omnipotent' claims for M-theory with this quote, given in response to a question from Larry King at the beginning of an interview King had with Hawking about his book:
Larry King: “If you could time travel would you go forward or backward?”
Stephen Hawking: “I would go forward and find if M-theory is indeed the theory of everything.”
Larry King and others; “Quietly laugh”
So here we have Hawking making sweeping claims with a theory that, by his own admission, has not even been shown to be a complete 'theory of everything' in the first place. Further critiques of Hawking's 'omnipotent' M-theory, by leading experts in the field, can be found on the following site, as well as the video of the interview between King and Hawking:
Barr on Hawking - Barry Arrington - September 2010
of related note:
Cosmologists Forced to “In the Beginning” - January 2011
Excerpt: In New Scientist today, Lisa Grossman reported on ideas presented at a conference entitled “State of the Universe” convened last week in honor of Stephen Hawking’s 70th birthday. Some birthday; he got “the worst presents ever,” she said: “two bold proposals posed serious threats to our existing understanding of the cosmos.” Of the two, the latter is most serious: a presentation showing reasons why “the universe is not eternal, resurrecting the thorny question of how to kick-start the cosmos without the hand of a supernatural creator.” It is well-known that Hawking has preferred a self-existing universe. Grossman quotes him saying, “‘A point of creation would be a place where science broke down. One would have to appeal to religion and the hand of God,’ Hawking told the meeting, at the University of Cambridge, in a pre-recorded speech.”
William Lane Craig: The Origins of the Universe - Has Hawking Eliminated God? Cambridge October 2011 - video lecture
This following quote, in critique of Hawking's book, is from Roger Penrose who worked closely with Stephen Hawking in the 1970's and 80's:
Roger Penrose Debunks Stephen Hawking's New Book 'The Grand Design' - video
As an interesting sidelight to Penrose debunking Hawking's theory for how the universe began, it seems that Roger Penrose's own pet 'non-theistic' theory for how the universe began without the need for God also humorously fails under scrutiny:
Mr Hoyle, call your office - Robert Sheldon - November 2010
Excerpt: I think I understand what Penrose is saying, and the truly weird thing about it is that I was introduced to this theory from a DC comic book circa 1967, whereas Sir Roger only just discovered it in 2007.
BRUCE GORDON: Hawking's irrational arguments - October 2010
Here is the last power-point slide of the preceding video:
The End Of Materialism?
Many times an atheist will object to Theism by saying something along the lines of the following quote by a prominent atheist:
Yet for an atheist/materialist to say that science can ONLY study law-like events that can faithfully be predicted, time after time, is sheer hypocrisy on the part of the atheist, for the atheist himself holds that strictly random, non-regular, non-law-like, indeed 'CHAOTIC' events are responsible for why the universe, and all life in it, originated, and 'evolves', in the first place. The atheist's own worldview, far from demanding regularity in nature, demands that random, and thus by definition 'non-predictable', events be at the base of all reality and of all life. Needless to say, being 'non-predictably random' is the exact polar opposite of the predictability of science that atheists accuse Theists of violating when Theists posit the rational Mind of God for the origin of the universe and/or all life in it. In truth, the atheist is just extremely prejudiced as to exactly what, or more precisely WHOM, he, or she, will allow to be the source of the random, irregular, non-predictable, non-law-like events that they themselves require to be at the very basis of the creation events of the universe and all life in it.

Moreover, unlike atheistic neo-Darwinian evolution, which continually requires these non-predictable, non-law-like, random events to be present within the base of reality (which is the antithesis of 'science' according to the atheist's own criteria for excluding any Theistic answer from ever being plausible), Intelligent Design requires only a seemingly 'random', top-down implementation of novel genetic, and body-plan, information at the inception of each new parent species, with all sub-speciation events thereafter, from the parent species, following a law-like adherence to the principle of genetic entropy. That principle happens to be in accordance with perhaps the most rigorously established law in science, the second law of thermodynamics, as well as with the law of Conservation of Information as laid out by Dr. Dembski and Marks.
The following is a humorous account of the preceding:
Blackholes - the neo-Darwinists' ultimate 'god of randomness' which can create all life in the universe (according to them)
Further notes:
The Effect of Infinite Probabilistic Resources on ID and Science (Part 2) - Eric Holloway - July 2011
Excerpt: ... since orderly configurations drop off so quickly as our space of configurations approaches infinity, this shows that infinite resources actually make it extremely easy to discriminate in favor of ID (Intelligent Design) when faced with an orderly configuration. Thus, intelligent design detection becomes more effective as the probabilistic resources increase.
What Would The World Look Like If Atheism Were Actually True? – video
When Nothing Created Everything? A humorous account of the atheist's creation myth
Materialists also used to try to find a place for random blind chance to hide by proposing a universe which expands and contracts (recycles) infinitely. Even at first glance, the 'recycling universe' conjecture faces so many problems from the second law of thermodynamics (entropy) as to render it effectively implausible as a serious theory, but now the recycling universe conjecture has been totally crushed by the hard evidence for a 'flat' universe found by the 'BOOMERANG' experiment.
Refutation Of Oscillating Universe - Michael Strauss PhD. - video:
Evidence For Flat Universe - Boomerang Project
Did the Universe Hyperinflate? - Hugh Ross - April 2010
Einstein's 'Biggest Blunder' Turns Out to Be Right - November 2010
Excerpt: By providing more evidence that the universe is flat, the findings bolster the cosmological constant model for dark energy over competing theories such as the idea that the general relativity equations for gravity are flawed. "We have at this moment the most precise measurements of lambda that a single technique can give," Marinoni said.
A 'flat universe', which is actually another very surprising finely-tuned 'coincidence' of the universe, means this universe, left to its own present course of accelerating expansion due to Dark Energy, will continue to expand forever, thus fulfilling the thermodynamic equilibrium of the second law to its fullest extent (entropic 'Heat Death' of the universe).
The Future of the Universe
Excerpt: After all the black holes have evaporated, (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons and positrons will fly from place to place, hardly ever encountering each other. It will be cold, and dark, and there is no known process which will ever change things. --- Not a happy ending.
The End Of Cosmology? - Lawrence M. Krauss and Robert J. Scherrer
Excerpt: We are led inexorably to a very strange conclusion. The window during which intelligent observers can deduce the true nature of our expanding universe might be very short indeed.
Psalm 102:25-27
Big Rip
Excerpt: The Big Rip is a cosmological hypothesis, first published in 2003, about the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, is progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future.
Thermodynamic Argument Against Evolution - Thomas Kindell - video
entire video:
Does God Exist? The End Of Christianity - Finding a Good God in an Evil World - video
Romans 8:18-21
The only hard evidence there is, the stunning precision found in the transcendent universal constants, points overwhelmingly to intelligent design by a transcendent Creator who originally established what the transcendent universal constants of physics could and would do during the creation of the universe. The hard evidence left no room for the blind chance of natural laws in this universe. Thus, materialism was forced into appealing to an infinity of un-testable universes for it was left with no footing in this universe. These developments in science make it seem like materialism was cast into the abyss of nothingness in so far as rationally explaining the fine-tuning of the universe.
As well as the universe having a transcendent beginning, thus confirming the Theistic postulation in Genesis 1:1, the following recent discovery of a 'Dark Age' for the early universe uncannily matches up with the Bible passage in Job 38:4-11.
For the first 400,000 years of our universe’s expansion, the universe was a seething maelstrom of energy and sub-atomic particles. This maelstrom was so hot that sub-atomic particles trying to form into atoms would have been blasted apart instantly, and so dense that light could not travel more than a short distance before being absorbed. If you could somehow live long enough to look around in such conditions, you would see nothing but brilliant white light in all directions. When the cosmos was about 400,000 years old, it had cooled to about the temperature of the surface of the sun. The last light from the "Big Bang" shone forth at that time. This "light" is still detectable today as the Cosmic Background Radiation.
This 400,000-year-old “baby” universe then entered into a period of darkness. When the dark age of the universe began, the cosmos was a formless sea of particles. By the time the dark age ended, a couple of hundred million years later, the universe was lit up again by the light of some of the galaxies and stars that had formed during this dark era. It was during the dark age of the universe that the heavier chemical elements necessary for life (carbon, oxygen, nitrogen and most of the rest) were first forged, by nuclear fusion inside the stars, out of the universe’s primordial hydrogen and helium.
It was also during this dark period of the universe that the great structures of the modern universe were first forged. Super-clusters, of thousands of galaxies stretching across millions of light years, had their foundations laid in the dark age of the universe. During this time the infamous “missing dark matter” was exerting more gravity in some areas than in others, drawing in hydrogen and helium gas and causing the formation of mega-stars. These mega-stars were massive, weighing in at 20 to more than 100 times the mass of the sun. The crushing pressure at their cores made them burn through their fuel in only a million years. It was here, in these short-lived mega-stars under these crushing pressures, that the chemical elements necessary for life were first forged out of the hydrogen and helium. The reason astronomers can’t see the light from these first mega-stars, during this dark era of the universe’s early history, is because the mega-stars were shrouded in thick clouds of hydrogen and helium gas. These thick clouds prevented the mega-stars from spreading their light through the cosmos as they forged the elements necessary for future life to exist on earth. After about 200 million years, the end of the dark age came to the cosmos. The universe was finally expansive enough to allow the dispersion of the thick hydrogen and helium “clouds”. With the continued expansion of the universe, the light of normal stars and dwarf galaxies was finally able to shine through the thick clouds of hydrogen and helium gas, bringing the dark age to a close. (How The Stars Were Born - Michael D. Lemonick),9171,1376229-2,00.html
Job 26:10
Job 38:4-11
“Where were you when I laid the foundations of the earth? Tell me if you have understanding. Who determined its measurements? Surely you know! Or who stretched a line upon it? To what were its foundations fastened? Or who laid its cornerstone, When the morning stars sang together, and all the sons of God shouted for joy? Or who shut in the sea with doors, when it burst forth and issued from the womb; When I made the clouds its garment, and thick darkness its swaddling band; When I fixed my limit for it, and set bars and doors; When I said, ‘This far you may come but no farther, and here your proud waves must stop!"
Hidden Treasures in the Book of Job - video
History of The Universe Timeline- Graph Image
As a sidelight to this, every class of elements that exists on the periodic table of elements is necessary for complex carbon-based life to exist on earth. The three most abundant elements in the human body, Oxygen, Carbon, Hydrogen, 'just so happen' to be the most abundant elements in the universe, save for helium, which is inert. A truly amazing coincidence that strongly implies 'the universe had us in mind all along'. Even uranium, the last naturally occurring 'stable' element on the periodic table of elements, is necessary for life. The heat generated by the decay of uranium is necessary to keep a molten core in the earth for an extended period of time, which is necessary for the magnetic field surrounding the earth, which in turn protects organic life from the harmful charged particles of the sun. As well, uranium decay provides the heat for tectonic activity and the turnover of the earth's crustal rocks, which is necessary to keep a proper mixture of minerals and nutrients available on the surface of the earth, which is necessary for long-term life on earth. (Denton; Nature's Destiny). These following articles and videos give a bit deeper insight into the crucial role that individual elements play in allowing life:
The Elements: Forged in Stars - video
Michael Denton - We Are Stardust - Uncanny Balance Of The Elements - Fred Hoyle Atheist to Deist/Theist - video
The Role of Elements in Life Processes
Periodic Table - Interactive web page for each element
Periodic Table - with stability, and native state, of elements listed
To answer our second question (What evidence is found for the earth's ability to support life?) we will consider many 'life-enabling characteristics' of the galaxy, sun, moon and earth, which establish that the earth is extremely unique in its ability to host advanced life in this universe. Again, the presumption of materialistic blind chance being the only reasonable cause must be dealt with. As opposed to the anthropic hypothesis, which starts off by presuming the earth is extremely unique in this universe, materialism begins by presuming that planets able to support life are fairly common in this universe. In fact astronomer Frank Drake (1930-present) proposed, in 1961, that advanced life should be fairly common in the universe. He developed a rather crude equation, called the 'Drake equation', plugged in some rather optimistic numbers, and reasoned that ten worlds with advanced life should exist in our Milky Way galaxy alone. One estimate of his worked out to roughly one trillion worlds with advanced life throughout the entire universe. Much to the disappointment of Star Trek fans, the avalanche of recent scientific evidence has found that the probability of finding another planet with the ability to host advanced life in this universe is not nearly as great as astronomer Frank Drake had originally predicted.
First, our solar system is not nearly as haphazard as some materialists would have us believe:
Weird Orbits of Neighbors Can Make 'Habitable' Planets Not So Habitable - May 2010
Thank God for Jupiter - July 2010
Excerpt: The July 16, 1994 and July 19, 2009 collision events on Jupiter demonstrate just how crucial a role the planet plays in protecting life on Earth. Without Jupiter’s gravitational shield our planet would be pummeled by frequent life-exterminating events. Yet Jupiter by itself is not an adequate shield. The best protection is achieved via a specific arrangement of several gas giant planets. The most massive gas giant must be nearest to the life support planet and the second most massive gas giant the next nearest, followed by smaller, more distant gas giants. Together Jupiter, Saturn, Uranus, and Neptune provide Earth with this ideal shield.
Of Gaps, Fine-Tuning and Newton’s Solar System - Cornelius Hunter - July 2011
Excerpt: The new results indicate that the solar system could become unstable if diminutive Mercury, the inner most planet, enters into a dance with Jupiter, the fifth planet from the Sun and the largest of all. The resulting upheaval could leave several planets in rubble, including our own. Using Newton’s model of gravity, the chances of such a catastrophe were estimated to be greater than 50/50 over the next 5 billion years. But interestingly, accounting for Albert Einstein’s minor adjustments (according to his theory of relativity), reduces the chances to just 1%.
Milankovitch Cycle Design - Hugh Ross - August 2011
Excerpt: In all three cases, Waltham proved that the actual Earth/Moon/solar system manifests unusually low Milankovitch levels and frequencies compared to similar alternative systems. ,,, Waltham concluded, “It therefore appears that there has been anthropic selection for slow Milankovitch cycles.” That is, it appears Earth was purposely designed with slow, low-level Milankovitch cycles so as to allow humans to exist and thrive.
Astrobiology research is revealing the high specificity and interdependence of the local parameters required for a habitable environment. These two features of the universe make it unlikely that environments significantly different from ours will be as habitable. At the same time, physicists and cosmologists have discovered that a change in a global parameter can have multiple local effects. Therefore, the high specificity and interdependence of local tuning and the multiple effects of global tuning together make it unlikely that our tiny island of habitability is part of an archipelago. Our universe is a small target indeed.
Astronomer Guillermo Gonzalez - P. 625, The Nature of Nature
Among Darwin Advocates, Premature Celebration over Abundance of Habitable Planets - September 2011
Excerpt: Today, such processes as planet formation details, tidal forces, plate tectonics, magnetic field evolution, and planet-planet, planet-comet, and planet-asteroid gravitational interactions are found to be relevant to habitability. ... What's more, not only are more requirements for habitability being discovered, but they are often found to be interdependent, forming a (irreducibly) complex "web." This means that if a planetary system is found not to satisfy one of the habitability requirements, it may not be possible to compensate for this deficit by adjusting a different parameter in the system. - Guillermo Gonzalez
In fact when trying to take into consideration all the different factors necessary to make life possible on any earth-like planet, we learn some very surprising things:
Privileged Planet Principle - Michael Strauss - video
Privileged Planet Principle - Scot Pollock (Notes In Description) - video
There are many independent characteristics required to be fulfilled for any planet to host advanced carbon-based life. Two popular books have recently been written, 'The Privileged Planet' by Guillermo Gonzalez and Jay Richards, and 'Rare Earth' by Peter Ward and Donald Brownlee, indicating the earth is extremely unique in its ability to host advanced life in this universe. 'The Privileged Planet', which holds that any life-supporting planet in the universe will also be 'privileged' for observation of the universe, has now been made into an excellent video.
The Privileged Planet - video
Privileged Planet - Observability Correlation - Gonzalez and Richards - video
The very conditions that make Earth hospitable to intelligent life also make it well suited to viewing and analyzing the universe as a whole.
- Jay Richards
A few videos of related 'observability correlation' interest;
Continued notes:
Our Privileged Planet (1 of 5) - Guillermo Gonzalez - video lecture
Guillermo Gonzalez & Stephen Meyer on Coral Ridge - video (Part 1)
Guillermo Gonzalez & Stephen Meyer on Coral Ridge - video (Part 2)
Fine Tuning Of The Universe - Privileged Planet (Notes In Description) - video
There is also a well researched statistical analysis of the many independent 'life-enabling characteristics' which gives strong mathematical indication that the earth is extremely unique in its ability to support complex life in this universe, and which shows, from a naturalistic perspective, that a life-permitting planet is extremely unlikely to 'accidentally emerge' in the universe. The statistical analysis, which is actually an extreme refinement of Drake's probability equation, is dealt with by astrophysicist Dr. Hugh Ross (1945-present) in his paper 'Probability for Life on Earth'.
Probability For Life On Earth - List of Parameters, References, and Math - Hugh Ross
A few of the items in Dr. Ross's "life-enabling characteristics" list are: planet location in a proper galaxy's 'habitable zone'; parent star size; surface gravity of planet; rotation period of planet; correct chemical composition of planet; correct size for moon; thickness of planet's crust; presence of magnetic field; correct and stable axis tilt; oxygen-to-nitrogen ratio in atmosphere; proper water content of planet; atmospheric electric discharge rate; proper seismic activity of planet; many complex cycles necessary for a stable temperature history of planet; translucent atmosphere; various complex, and inter-related, cycles for various elements; etc., etc. I could go into a lot more detail, for there are a total of 322 known parameters on his list (816 in his updated list) which have to be met for complex life to be possible on Earth, or on a planet like Earth. Individually, these limits are not that impressive, but when we realize that ALL these limits have to be met at the same time, and that not one of them can be out of limits for any extended period of time, then the condition becomes 'irreducibly complex' and the probability for a world which can host advanced life in this universe becomes very extraordinary. Here is the final summary of Dr. Hugh Ross's 'conservative' estimate for the probability of another life-hosting world in this universe.
Probability for occurrence of all 322 parameters =10^-388
Dependency factors estimate =10^96
Longevity requirements estimate =10^14
Resulting probability for occurrence of all 322 parameters = 10^-304
Maximum possible number of life support bodies in universe =10^22
Thus, less than 1 chance in 10^282 (million trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion) exists that even one such life-support body would occur anywhere in the universe without invoking divine miracles.
Dr. Hugh Ross, and his team, have now drastically refined this probability of 1 in 10^304 to a staggering probability of 1 in 10^1054:
Does the Probability for ETI = 1?
Excerpt: On the Reasons To Believe website we document that the probability a randomly selected planet would possess all the characteristics intelligent life requires is less than 10^-304. A recent update that will be published with my next book, Hidden Purposes: Why the Universe Is the Way It Is, puts that probability at 10^-1054.
Linked from Appendix C from Dr. Ross's book, 'Why the Universe Is the Way It Is';
Probability for occurrence of all 816 parameters ≈ 10^-1333
dependency factors estimate ≈ 10^324
longevity requirements estimate ≈ 10^45
Resulting probability for occurrence of all 816 parameters ≈ 10^-1054
Maximum possible number of life support bodies in observable universe ≈ 10^22
Thus, less than 1 chance in 10^1032 exists that even one such life-support body would occur anywhere in the universe without invoking divine miracles.
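As a check on the exponent arithmetic of this updated list (my own illustration; I am assuming, as the layout of the list suggests, that the dependency factors raise the base probability and the longevity requirements lower it):

10^-1333 x 10^324 x 10^-45 = 10^(-1333 + 324 - 45) = 10^-1054

and, comparing against the maximum possible number of life-support bodies,

10^-1054 x 10^22 = 10^-1032,

which matches the stated odds of less than 1 chance in 10^1032.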
Hugh Ross - Evidence For Intelligent Design Is Everywhere (10^-1054) - video
Isaiah 40:28
Hugh Ross - Four Main Research Papers
"This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent Being. … This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called “Lord God” παντοκρατωρ [pantokratòr], or “Universal Ruler”… The Supreme God is a Being eternal, infinite, absolutely perfect."
Sir Isaac Newton - Quoted from what many consider the greatest science masterpiece of all time, his book "Principia"
Related note:
The Loneliest Planet - ALAN HIRSHFELD - December 2011
Excerpt: While he cannot prove a galaxy-wide absence of other civilizations, he presents an array of modern, research-based evidence that renders that conclusion eminently reasonable.
The following is another surprising Privileged Planet parameter which fairly recently came to light:
Cosmic Rays Hit Space Age High
Excerpt: "The entire solar system from Mercury to Pluto and beyond is surrounded by a bubble of solar magnetism called "the heliosphere."
The Protective Boundaries of our Solar System - NASA IBEX - video
Many people simply presume that solar system formation is fairly well understood by science, but that is not the case:
Are Saturn’s Rings Evolving? - July 2010
Excerpt: Not all is well in theories of planet formation, though. Astrobiology Magazine complained this week that many of the exoplanets discovered around other stars do not fit theories of the origin of the solar system.
Planet-Making Theories Don’t Fit Extrasolar Planets;
Excerpt: “The more new planets we find, the less we seem to know about how planetary systems are born, according to a leading planet hunter.” We cannot apply theories that fit our solar system to other systems:
Medium size worlds upset “Earth is not unique” planet modelling - January 2012
The solar systems that scientists are currently finding, in our corner of the universe, simply do not match the 'predictions':
Exoplanet Hunters Fail Predictions – August 2010
Excerpt: There are so many surprises in this field—almost nothing is turning out as we expected. There are Jupiter-mass planets in three-day orbits. There are planets with masses that are between those of the terrestrial planets in our solar system and the gas giants in the outer part of our solar system. There are Jupiter-mass planets with hugely inflated radii—at densities far lower than what we thought were possible for a gas-giant planet. There are giant planets with gigantic solid cores that defy models of planet formation, which say there shouldn’t be enough solids available in a protoplanetary disk to form a planet that dense. There are planets with tilted orbits. There are planets that orbit the poles of their stars, in so-called circumpolar orbits. There are planets that orbit retrograde—that is, they orbit in the opposite direction of their star’s rotation. There are systems of planets that are in configurations that are hard to describe given our understanding of planet formation. For instance, some planets are much too close to one another.
But a lot of those surprises have to do with the fact that we have only one example of a planetary system—our solar system—to base everything on, right?
What’s interesting is that we’ve found very little that resembles our example.
Did cosmic collisions make habitable planets rare? - August 2011
Excerpt: Most of the planets in our own solar system, including Earth, have relatively circular orbits and are lined up along a plane that isn't tilted much from the sun's equator. They also orbit in the same direction around the sun as our star spins. But many other solar systems are not so neatly ordered, harboring planets that move in the opposite direction of their stars' spin on highly tilted orbits.
Man in the moon looking younger - August 17, 2011
Excerpt: "The extraordinarily young age of this lunar sample either means that the Moon solidified significantly later than previous estimates, or that we need to change our entire understanding of the Moon's geochemical history," Carlson said.
Habitable Zones Constrained by Tides
New Definition Could Further Limit Habitable Zones Around Distant Suns: - June 2009
... liquid water is essential for life, but a planet also must have plate tectonics to pull excess carbon from its atmosphere and confine it in rocks to prevent runaway greenhouse warming. Tectonics, or the movement of the plates that make up a planet's surface, typically is driven by radioactive decay in the planet's core, but a star's gravity can cause tides in the planet, which creates more energy to drive plate tectonics.... Barnes added, "The bottom line is that tidal forcing is an important factor that we are going to have to consider when looking for habitable planets."
Tidal forces could squeeze out planetary water - February 2012
Excerpt: Alien planets might experience tidal forces powerful enough to remove all their water, leaving behind hot, dry worlds like Venus, researchers said. These findings might significantly affect searches for habitable exoplanets, scientists explained. Although some planets might dwell in regions around their star friendly enough for life as we know it, they could actually be lifelessly dry worlds. ,,, After a tidal Venus loses all its water and becomes uninhabitable, the tides could alter its orbit so that it no longer experiences tidal heating. As such, it might no longer appear like a tidal Venus, but look just like any other world in its star's habitable zone, fooling researchers into thinking it is potentially friendly for life, even though it has essentially been sterilized.
A Renewed Concern: Flares and Astrobiology - January 2011
Excerpt: “Such powerful flares bode ill for any possible biology, life, on any planet that happens to be close to that flaring star. It’s extraordinary to think that the most numerous stars, the smallest ones in our galaxy, pose this threat to life.”
Many Stars Are Planet Destroyers - September 2010
Excerpt: A NASA study is being called “Bad news for planet hunters.” A survey of stars in globular clusters has not turned up the number of planets expected. Astronomers conclude that stars in these presumably ancient clusters have long since devoured their planets or sent them careening out into oblivion.
As well, tectonic activity, which is itself finely tuned for life on earth, is not nearly as well understood by science as many people think:
Dominant paradigms in science and their attendant anomalies - David Tyler - July 2010
Excerpt: The relative contributions made by these different forces have been much discussed by scientists developing plate tectonic theory. However, firm conclusions have not been reached. If there is any consensus, it is that boundary forces are more significant than drag forces, and that slab pull is more significant than ridge push.
As well, the prevailing 'impact theory' for how our life-enabling moon is hypothesized to have formed is not nearly as well established as some people think:
Researchers discover water on the moon is widespread, similar to Earth's - July 2010
In a related note about water on Mars:
Surface of Mars an unlikely place for life after 600-million-year drought, say scientists - February 2012
Excerpt: The results of the soil analysis at the Phoenix site suggest the surface of Mars has been arid for hundreds of millions of years, despite the presence of ice and the fact that previous research has shown that Mars may have had a warmer and wetter period in its earlier history more than three billion years ago. The team also estimated that the soil on Mars had been exposed to liquid water for at most 5,000 years since its formation billions of years ago. They also found that Martian and Moon soil is being formed under the same extremely dry conditions.
Satellite images and previous studies have proven that the soil on Mars is uniform across the planet, which suggests that the results from the team's analysis could be applied to all of Mars. This implies that liquid water has been on the surface of Mars for far too short a time for life to maintain a foothold on the surface.
As further evidence for the 'Privileged Planet': relative element abundances, complex symbiotic chemistry, water, and the fine-tuning of light for carbon-based life on earth all display extraordinary characteristics of design, which lend strong support to the Privileged Planet principle.
It is found that not only must the right chemicals be present on earth for life to exist; they must also be present on the earth in 'specific abundances'.
Elemental Evidence of Earth’s Divine Design - Hugh Ross PhD. - April 2010
Table: Earth’s Anomalous Abundances - Page 8
carbon* 1,200 times less
nitrogen* 2,400 times less
fluorine* 50 times more
sodium* 20 times more
aluminum 40 times more
phosphorus* 4 times more
sulfur* 60 times less
potassium* 90 times more
calcium 20 times more
titanium 65 times more
vanadium* 9 times more
chromium* 5 times less
nickel* 20 times less
cobalt* 5 times less
selenium* 30 times less
yttrium 50 times more
zirconium 130 times more
niobium 170 times more
molybdenum* 5 times more
tin* 3 times more
iodine* 3 times more
gold 5 times less
lead 170 times more
uranium 16,000 times more
thorium 23,000 times more
water 250 times less
Compositions of Extrasolar Planets - July 2010
Excerpt: ,,,the presumption that extrasolar terrestrial planets will consistently manifest Earth-like chemical compositions is incorrect. Instead, the simulations revealed "a wide variety of resulting planetary compositions."
Chances of Exoplanet Life ‘Impossible’? Or ’100 percent’? - February 2011
Excerpt: Howard Smith, an astrophysicist at Harvard University, made the headlines earlier this year when he announced, rather pessimistically, that aliens will unlikely exist on the extrasolar planets we are currently detecting. “We have found that most other planets and solar systems are wildly different from our own. They are very hostile to life as we know it,” “Extrasolar systems are far more diverse than we expected, and that means very few are likely to support life,” he said.
Elements of ExoPlanets - February 2012
The stunning long-term balance of the necessary chemicals for life on the face of the earth is a wonder in and of itself:
Chemical Cycles:
Long-term chemical balance is essential for life on earth. Complex symbiotic chemical cycles keep the amounts of elements on the earth's surface in relatively perfect balance, and thus in steady supply to the higher life forms that depend on them. This stability is absolutely essential for the higher life forms to exist on Earth for any extended period of time.
Carbon and Nitrogen Cycles - music video
Carbon Cycle - Illustration
When we look at water, the most common substance on earth and in our bodies, we find many odd characteristics which clearly appear to be designed. These oddities are absolutely essential for life on earth. Some simple life can exist without the direct energy of sunlight, and some simple life can exist without oxygen; but no life can exist without water. Water is called a universal solvent because it has the unique ability to dissolve a far wider range of substances than any other solvent. This 'universal solvent' ability of water is essential for the cells of living organisms to process the wide range of substances necessary for life.

Another oddity is that water expands as it freezes into ice, increasing in volume by about 9%. Thus water floats when it becomes a solid instead of sinking. This is an exceedingly rare ability, yet if it were not for this fact, lakes and oceans would freeze from the bottom up, the earth would be a frozen wasteland, and human life would not be possible.

Water also has the unusual ability to pull itself into very fine tubes and small spaces, defying gravity. This is called capillary action. This action is essential for the breakup of mineral-bearing rocks into soil: water pulls itself into tiny spaces on the surface of a rock and freezes; as it expands, it breaks the rock into tinier pieces, thus producing soil. Capillary action is also essential for the movement of water through soil to the roots of plants, and for the movement of water from the roots to the tops of the plants, even to the tops of the mighty redwood trees,,,
Towering Giants Of Teleological Beauty - October 2010
,,,Capillary action is also essential for the circulation of the blood in our very own capillary blood vessels. Water's melting and boiling points are not where common sense would indicate they should be given its molecular weight; the three sister compounds of water all behave as would be predicted by their molecular weights, yet water just happens to have melting and boiling points that are of optimal biological utility. The other properties of water we can measure, such as its specific slipperiness (viscosity) and its ability to absorb and release more heat than any other natural substance, have to be as they are in order for life to be possible on earth. Even the oceans have to be the size they are in order to stabilize the temperature of the earth so human life may be possible. On and on, through each characteristic of water we can possibly measure, it turns out to be required to be almost exactly as it is, or complex life on this earth could not exist. No other liquid in the universe comes anywhere near matching water in its fitness for life (Denton: Nature's Destiny).
Here is a more complete list of the anomalous life enabling properties of water:
Anomalous life enabling properties of water
Water's remarkable capabilities - December 2010 - Peer Reviewed
Excerpt: All these traits are contained in a simple molecule of only three atoms. One of the most difficult tasks for an engineer is to design for multiple criteria at once. ... Satisfying all these criteria in one simple design is an engineering marvel. Also, the design process goes very deep since many characteristics would necessarily be changed if one were to alter fundamental physical properties such as the strong nuclear force or the size of the electron.
Water's quantum weirdness makes life possible - October 2011
Excerpt: They found that the hydrogen-oxygen bonds were slightly longer than the deuterium-oxygen ones, which is what you would expect if quantum uncertainty was affecting water’s structure. “No one has ever really measured that before,” says Benmore.
We are used to the idea that the cosmos’s physical constants are fine-tuned for life. Now it seems water’s quantum forces can be added to this “just right” list.
Water cycle song - music video
Water is semi-famous for its many mysterious and 'miraculous' characteristics that enable physical life to be possible on earth. The following article goes even deeper than the 'science of water' to reveal many mysterious 'spiritual characteristics' of water found in the Bible that enable a deeper 'spiritual life' to even be possible.
WATER, as a metaphor (in the Bible)
Visible light is also incredibly fine-tuned for life to exist. Though visible light is only a tiny fraction of the total electromagnetic spectrum coming from the sun, it happens to be the "most permitted" portion of the sun's spectrum allowed to filter through our atmosphere. All the other bands of electromagnetic radiation directly surrounding visible light happen to be harmful to organic molecules and are almost completely absorbed by the atmosphere. The tiny amount of harmful UV radiation (which is not visible light) allowed to filter through the atmosphere is needed to keep various populations of single-cell bacteria from over-populating the world (Ross). The size of light's wavelengths and the constraints on the size allowable for the protein molecules of organic life also seem to be tailor-made for each other. This "tailor-made fit" allows photosynthesis, the miracle of sight, and many other things that are necessary for human life. These specific frequencies of light (that enable plants to manufacture food and astronomers to observe the cosmos) represent less than 1 trillionth of a trillionth (10^-24) of the universe's entire range of electromagnetic emissions. Like water, visible light also appears to be of optimal biological utility (Denton; Nature's Destiny).
Extreme Fine Tuning of Light for Life and Scientific Discovery - video
Fine Tuning Of Universal Constants, Particularly Light - Walter Bradley - video
Fine Tuning Of Light to the Atmosphere, to Biological Life, and to Water - graphs
Intelligent Design - Light and Water - video
Proverbs 3:19
"The Lord by wisdom founded the earth: by understanding He established the heavens;"
The scientific evidence clearly indicates the earth is extremely unique in this universe in its ability to support life. These facts have been rigorously investigated and cannot be dismissed out of hand as some sort of glitch in the data. Here materialism can offer no competing theory of blind chance which can offset the overwhelming evidence for the earth's apparent intelligent design, which enables her to host complex life. A materialist can only assert that we are extremely 'lucky', and that is some kind of fantastic luck to believe in. The odds of another life-supporting earth 'just so happening' in this universe (1 in 10^1054) are not even remotely as good as the odds a blind man would have of finding one pre-selected grain of sand, hidden somewhere in all the vast expanses of the deserts and beaches of the world, on his first try, and then of repeatedly finding that grain of sand, first time every time, several times over! These fantastic odds against another life-supporting world 'just so happening' in this universe have not even been refined to their final upper limits yet; the odds will only get worse for the atheistic materialist.,,, When faced with such staggering odds against life 'just so happening' elsewhere in the universe, I find the Search for Extra-Terrestrial Intelligence by SETI to be amusing:
SETI - Search For Extra-Terrestrial Intelligence receives message from God,,,,, Almost - video
I find it strange that the SETI (Search for Extra-Terrestrial Intelligence) organization spends millions of dollars vainly searching for signs of extra-terrestrial life in this universe, when all anyone has to do to make solid contact with THE primary 'extra-terrestrial intelligence' of the entire universe is to pray with a sincere heart. God certainly does not hide from those who sincerely seek Him. Actually communicating with the Creator of the universe is certainly a lot more exciting than not communicating with some little green men that in all probability do not even exist, unless of course, God decided to create them!
Isaiah 45:18-19
For thus says the Lord, who created the heavens, who is God, who formed the earth and made it, who established it, who did not create it in vain, who formed it to be inhabited: “I am the Lord, and there is no other. I have not spoken in secret, in a dark place of the earth; I did not say to the seed of Jacob, ‘seek me in vain’; I, the Lord speak righteousness, I declare things that are right.”
“When I was young, I said to God, 'God, tell me the mystery of the universe.' But God answered, 'That knowledge is for me alone.' So I said, 'God, tell me the mystery of the peanut.' Then God said, 'Well George, that's more nearly your size.' And he told me.”
George Washington Carver
Inventors - George Washington Carver
Excerpt: "God gave them to me" he (Carver) would say about his ideas, "How can I sell them to someone else?"
Hearing God – Are We Listening? – video
To answer our third question (What evidence is found for the first life on earth?) we will look at the evidence for the first appearance of life and the chemical activity of the first bacterial life on earth. Once again, the presumption of materialistic blind chance being the only reasonable cause must be dealt with.
First and foremost, we now have evidence for photosynthetic life appearing suddenly, as soon as water appeared on the earth, in the oldest sedimentary rocks ever found.
The Sudden Appearance Of Photosynthetic Life On Earth - video
Team Claims It Has Found Oldest Fossils By NICHOLAS WADE - August 2011
Excerpt: Rocks older than 3.5 billion years have been so thoroughly cooked as to destroy all cellular structures, but chemical traces of life can still be detected. Chemicals indicative of life have been reported in rocks 3.5 billion years old in the Dresser Formation of Western Australia and, with less certainty, in rocks 3.8 billion years old in Greenland.
Earliest (Bacteria) fossils found in Australia, 3.4 bya
Dr. Hugh Ross - Origin Of Life Paradox - video
Archaean Microfossils and the Implications for Intelligent Design - August 2011
Excerpt: This dramatically limits the amount of time, and thus the probabilistic resources, available to those who wish to invoke purely unguided and purposeless material processes to explain the origin of life.
Could Impacts Jump-Start the Origin of Life? - Hugh Ross - article
Moreover, the early earth's atmosphere is found not to have been 'reducing', as is commonly taught in materialistic origin-of-life scenarios:
Time to end speculation about a reducing atmosphere for the early Earth - David Tyler - December 2011
Excerpt: Using zircons dated to almost 4.4 Ga, the researchers have analysed their redox state (a measure of the degree of oxygenation of the mineral).,,, "[In] this issue, Trail et al. report their analysis of the sole mineral survivors of the Hadean, zircon samples more than 4 billion years old. Their findings allowed them to determine the 'fugacity' of oxygen in Hadean magmatic melts, a quantity that acts as a measure of magmatic redox conditions. Unexpectedly, the zircons record oxygen fugacities identical to those in the present-day mantle, leading the authors to conclude that Hadean volcanic gases were as highly oxidized as those emitted today."
Late Heavy Bombardment - graph
Life - Its Sudden Origin and Extreme Complexity - Dr. Fazale Rana - video
The evidence scientists have discovered in the geologic record is stunning in its support of the anthropic hypothesis. The oldest sedimentary rocks on earth known to science originated underwater (and thus in relatively cool environs) 3.86 billion years ago. Those sediments, which are exposed at Isua in southwestern Greenland, also contain the earliest chemical evidence (fingerprint) of 'photosynthetic' life [Nov. 7, 1996, Nature]. This evidence has been fought by materialists, since it is totally contrary to their evolutionary theory. Yet Danish scientists were able to bring forth another line of geological evidence to substantiate the primary line of geological evidence for photosynthetic life in the earth's earliest sedimentary rocks.
U-rich Archaean sea-floor sediments from Greenland - indications of +3700 Ma oxygenic photosynthesis (2003)
Moreover, evidence for 'sulfate reducing' bacteria has been discovered alongside the evidence for photosynthetic bacteria:
When Did Life First Appear on Earth? - Fazale Rana - December 2010
Excerpt: The primary evidence for 3.8 billion-year-old life consists of carbonaceous deposits, such as graphite, found in rock formations in western Greenland. These deposits display an enrichment of the carbon-12 isotope. Other chemical signatures from these formations that have been interpreted as biological remnants include uranium/thorium fractionation and banded iron formations. Recently, a team from Australia argued that the dolomite in these formations also reflects biological activity, specifically that of sulfate-reducing bacteria.
Thus we now have fairly conclusive evidence for bacterial life in the oldest sedimentary rocks ever found by scientists on earth.
On the third page of the following site there is an illustration that shows some of the interdependent, 'life-enabling', biogeochemical complexity of different types of bacterial life on Earth.,,,
Microbial Mat Ecology – Image on page 92 (third page down)
,,,Please note that if even one type of bacteria group did not exist in this complex cycle of biogeochemical interdependence, as illustrated on the third page of the preceding site, then all of the different bacteria would soon die out. This essential biogeochemical interdependence of the most primitive types of bacteria we have evidence of on the ancient earth makes the origin-of-life 'problem' for neo-Darwinists that much worse. For now not only do neo-Darwinists have to explain how the 'miracle of life' happened once with the origin of photosynthetic bacteria, but they must also explain how all the different types of bacteria that photosynthetic bacteria depend on, in this irreducibly complex biogeochemical web, miraculously arose just in time to supply the necessary nutrients, in their biogeochemical link in the chain, for photosynthetic bacteria to continue to survive. As well, though not clearly shown in the illustration on the preceding site, please note that a long-term tectonic cycle, the turnover of the Earth's crustal rocks, must also be fine-tuned to a certain degree with the bacteria, and thus plays an important 'foundational' role in the overall ecology of the biogeochemical system that must be accounted for as well.
As a side issue to these complex interdependent biogeochemical relationships of the 'simplest' bacteria on Earth, which provide the foundation for a 'friendly' environment on Earth that is hospitable to the higher lifeforms that eventually appeared above them, it is interesting to note man's failure to build a miniature, self-enclosed ecology in which humans could live for any extended period of time.
Biosphere 2 – What Went Wrong?
Excerpt: Other Problems
Biosphere II’s water systems became polluted with too many nutrients. The crew had to clean their water by running it over mats of algae, which they later dried and stored.
Also, as a symptom of further atmospheric imbalances, the level of dinitrogen oxide became dangerously high. At these levels, there was a risk of brain damage due to a reduction in the synthesis of vitamin B12.
The simplest photosynthetic life on earth is exceedingly complex, too complex to happen by accident even if the primeval oceans had been full of pre-biotic soup.
The Miracle Of Photosynthesis - electron transport - video
Electron transport and ATP synthesis during photosynthesis - Illustration
There is actually a molecular motor, that surpasses man made motors in engineering parameters, that is integral to the photosynthetic process:
Evolution vs ATP Synthase - Molecular Machine - video
The ATP Synthase Enzyme - an exquisite motor necessary for first life - video
The photosynthetic process is clearly an irreducibly complex condition:
"There is no question about photosynthesis being Irreducibly Complex. But it’s worse than that from an evolutionary perspective. There are 17 enzymes alone involved in the synthesis of chlorophyll. Are we to believe that all intermediates had selective value? Not when some of them form triplet states that have the same effect as free radicals like O2. In addition if chlorophyll evolved before antenna proteins, whose function is to bind chlorophyll, then chlorophyll would be toxic to cells. Yet the binding function explains the selective value of antenna proteins. Why would such proteins evolve prior to chlorophyll? and if they did not, how would cells survive chlorophyll until they did?" Uncommon Descent Blogger
Evolutionary biology: Out of thin air John F. Allen & William Martin:
The measure of the problem is here: “Oxygenic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell.”
Of note: anoxygenic (without oxygen) photosynthesis is even more of a complex chemical pathway than oxygenic photosynthesis is:
"Remarkably, the biosynthetic routes needed to make the key molecular component of anoxygenic photosynthesis are more complex than the pathways that produce the corresponding component required for the oxygenic form."; - Fazale Rana
Also of note: Anaerobic organisms, that live without oxygen, and most viruses are quickly destroyed by direct contact with oxygen.
In what I find to be a very fascinating discovery, photosynthetic life, the absolutely vital link on which all higher life on earth depends for food, is found to use 'non-local' quantum mechanical principles to accomplish photosynthesis. Moreover, this is direct evidence that a non-local cause, beyond space-time and mass-energy, must be responsible for 'feeding' all life on earth, since all higher life on earth is ultimately dependent on this non-local 'photosynthetic energy' to live:
Non-Local Quantum Entanglement In Photosynthesis - video with notes in description
Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Gregory S. Engel, Nature (12 April 2007)
Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long term energy storage is initiated.,,,, This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path. ---- Conclusion? Obviously Photosynthesis is a brilliant piece of design by "Someone" who even knows how quantum mechanics works.
Quantum Mechanics at Work in Photosynthesis: Algae Familiar With These Processes for Nearly Two Billion Years - Feb. 2010
Excerpt: "We were astonished to find clear evidence of long-lived quantum mechanical states involved in moving the energy. Our result suggests that the energy of absorbed light resides in two places at once -- a quantum superposition state, or coherence -- and such a state lies at the heart of quantum mechanical theory.",,, "It suggests that algae knew about quantum mechanics nearly two billion years before humans," says Scholes.
Life Masters Physics - Feb. 2010
Excerpt: Collini et al.2 report evidence suggesting that a process known as quantum coherence ‘wires’ together distant molecules in the light-harvesting apparatus of marine cryptophyte algae.,,,“Intriguingly, recent work has documented that light-absorbing molecules in some photosynthetic proteins capture and transfer energy according to quantum-mechanical probability laws instead of classical laws at temperatures up to 180 K,”. ,,, “This contrasts with the long-held view that long-range quantum coherence between molecules cannot be sustained in complex biological systems, even at low temperatures.”
Materialists have tried to get around this crushing evidence for the sudden appearance of extremely complex, and elegant, photosynthetic life in the oldest sedimentary rocks ever found on earth by suggesting life could have originated in extreme conditions at hydrothermal vents. Yet, ignoring the fact that hydrothermal vents were themselves submerged in the very water that produced the earliest sedimentary rocks in which we find evidence of photosynthetic life, the materialists are once again betrayed by the empirical evidence:
Refutation Of Hyperthermophile Origin Of Life scenario
The origin of life--did it occur at high temperatures?
Chemist explores the membranous origins of the first living cell:
Nick Lane Takes on the Origin of Life and DNA - Jonathan McLatchie - July 2010
Origin-of-Life Theorists Fail to Explain Chemical Signatures in the Cell - Casey Luskin - February 15, 2012
(Stephen Meyer - Signature in the Cell, p. 347)
Besides hydrothermal vents, it is also commonly, and erroneously, presumed in many grade-school textbooks that life slowly arose in a primordial ocean of pre-biotic soup. Yet there are no chemical signatures in the geologic record indicating that an ocean of this pre-biotic soup ever existed. In fact, as stated earlier, the evidence indicates that complex photosynthetic life appeared on earth as soon as water appeared, with no chemical signature whatsoever of prebiotic activity.
The Primordial Soup Myth:
Excerpt: "Accordingly, Abelson(1966), Hull(1960), Sillen(1965), and many others have criticized the hypothesis that the primitive ocean, unlike the contemporary ocean, was a "thick soup" containing all of the micromolecules required for the next stage of molecular evolution. The concept of a primitive "thick soup" or "primordial broth" is one of the most persistent ideas at the same time that is most strongly contraindicated by thermodynamic reasoning and by lack of experimental support." - Sidney Fox, Klaus Dose on page 37 in Molecular Evolution and the Origin of Life.
New Research Rejects 80-Year Theory of 'Primordial Soup' as the Origin of Life - Feb. 2010
William Martin - an evolutionary biologist
Moreover, water is considered a 'universal solvent', a fact which obeys thermodynamics and thus defies the origin of life, since an ocean of water drives biopolymers toward depolymerization rather than polymerization (as the following excerpt explains).
Abiogenic Origin of Life: A Theory in Crisis - Arthur V. Chadwick, Ph.D.
Excerpt: The synthesis of proteins and nucleic acids from small molecule precursors represents one of the most difficult challenges to the model of prebiological evolution. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. A thermodynamic analysis of a mixture of protein and amino acids in an ocean containing a 1 molar solution of each amino acid (100,000,000 times higher concentration than we inferred to be present in the prebiological ocean) indicates the concentration of a protein containing just 100 peptide bonds (101 amino acids) at equilibrium would be 10^-338 molar. Just to make this number meaningful, our universe may have a volume somewhere in the neighborhood of 10^85 liters. At 10^-338 molar, we would need an ocean with a volume equal to 10^229 universes (a 1 followed by 229 zeros) just to find a single molecule of any protein with 100 peptide bonds. So we must look elsewhere for a mechanism to produce polymers. It will not happen in the ocean.
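Chadwick's figures can be verified with a short log10 calculation; a minimal Python sketch, where the only added input is Avogadro's number (used to convert the molar concentration to molecules per liter):

import math
# Check of the excerpt's figures, in log10 units (the raw numbers overflow floats).
log_conc_molar = -338                     # equilibrium protein concentration (mol/L), per the excerpt
log_avogadro = math.log10(6.022e23)       # ~23.8, molecules per mole
log_molecules_per_liter = log_conc_molar + log_avogadro   # ~ -314
log_liters_for_one = -log_molecules_per_liter             # liters of ocean per single molecule
log_universe_liters = 85                  # universe volume in liters, per the excerpt
print(f"Universe-volumes of ocean needed: ~10^{log_liters_for_one - log_universe_liters:.0f}")  # ~10^229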
A Substantial Conundrum Confronting The Chemical Origin Of Life - August 2011
Excerpt: 1. Peptide bond formation is an endothermic reaction. This means that the reaction requires the absorption of energy: It does not take place spontaneously.
2. Peptide bond formation is a condensation reaction. It hence involves the net removal of a water molecule. So not only can this reaction not happen spontaneously in an aqueous medium, but, in fact, the presence of water inhibits the reaction.
Sea Salt only adds to this thermodynamic problem:
...even at concentrations seven times weaker than in today’s oceans. The ingredients of sea salt are very effective at dismembering membranes and preventing RNA units (monomers) from forming polymers any longer than two links (dimers). Creation Evolution News - Sept. 2002
The following article and videos have a fairly good overview of the major problems facing any naturalistic Origin Of Life scenario:
On the Origin of Life - The Insurmountable Problems Of Chemistry - Charles Thaxton PhD. - video
Evolutionary Assumptions - Life from dead chemicals? - video
"Shut up," Coyne Explained - January 2012
Excerpt: Coyne writes that Kuhn's criticisms of current origin-of-life research are "absurdly funny" -- even though such research (into the origin of life) has not led to the abiotic formation of a single functional protein, much less a living cell.
Though the 1953 Miller-Urey experiment is often touted by evolutionists as evidence that life can spontaneously arise on the primitive earth, they always seem to fail to mention the severe problems that were found with the experiment, among them that only a few of the building blocks for proteins, the amino acids, were ever actually produced, in minute quantities and in an artificial environment:
Miller-Urey Experiment
Excerpt: While successful in trapping some amino acids, this is now recognized as not being analogous to the real natural world - there are no known or even hypothesized protective traps observed in nature. What they made was 85% tar and 13% carboxylic acids (both toxic to life) and only 2% amino acids. Problem: mostly only 2 of the 20 different amino acids life needs were produced, and they are much more likely to bond with the tar or acid than with each other. Half of the amino acids were right-handed and half were left-handed. This is a problem because proteins are built exclusively from left-handed amino acids, and even the smallest proteins have 70-100 amino acids all in a precise order.
Rare Amino Acid Challenge to the Origin of Life
Excerpt: Granted that on early Earth arginine and lysine are either totally missing or available only at such extremely low abundance levels as to be irrelevant, and recognizing that arginine-and-lysine-containing proteins are essential for the crucial protein-DNA interactions, naturalistic explanations for the origin of life are ruled out.
Programming of Life - Amino Acids - video
The problem of 'left-handed' homochirality, highlighted by the racemic mixture produced in the Miller-Urey experiment, is of no small concern to any Origin Of Life scenario put forth by evolutionists:
Dr. Charles Garner on the problem of Chirality in nature and Origin of Life Research - audio
Origin Of Life - Problems With Proteins - Homochirality - Charles Thaxton PhD. - video
Homochirality and Darwin - Robert Sheldon - April 2010
Excerpt: there is no abiotic path from a racemic solution to a stereo-active solution of amino acid(s) that doesn't involve a biotic chiral agent, be it chiral beads or Louis Pasteur himself. Like many critiques of ID, the problem with these "Darwinist" solutions is that they always smuggle in some information, in this case, chiral agents.
Homochirality and Darwin: part 2 - Robert Sheldon - May 2010
Excerpt: With regard to the deniers who think homochirality is not much of a problem, I only ask whether a solution requiring multiple massive magnetized black-hole supernovae doesn't imply there is at least a small difficulty to overcome? A difficulty, perhaps, that points to the non-random nature of life in the cosmos?
Left-Handed Amino Acids Explained Naturally? Not by a long shot! - January 2010
The severity of the homochirality problem begins to highlight the number one question facing any Origin Of Life research. Namely, "Where is the specified complexity (information) coming from?" Even this recent 'evolution friendly' article readily admitted the staggering level of 'specified complexity' (information) being dealt with in the first cell:
Was our oldest ancestor a proton-powered rock? - Oct. 2009
Excerpt: “There is no doubt that the progenitor of all life on Earth, the common ancestor, possessed DNA, RNA and proteins, a universal genetic code, ribosomes (the protein-building factories), ATP and a proton-powered enzyme for making ATP. The detailed mechanisms for reading off DNA and converting genes into proteins were also in place. In short, then, the last common ancestor of all life looks pretty much like a modern cell.”
So much for 'simple' life!
Colossians 1:16
I think David Abel, the director of the Gene Emergence Project, does a very good job of highlighting just how crucial 'information' is to Origin of Life research:
Chance and necessity do not explain the origin of life: Trevors JT, Abel DL.
Does New Scientific Evidence About the Origin of Life Put an End to Darwinian Evolution? - Stephen Meyer - 4 part video
The "simplest" life currently found on the earth that is able to exist outside of a test tube, the parasitic Mycoplasmal, has between a 0.56-1.38 megabase genome which results in drastically reduced biosynthetic capabilities and explains their dependence on a host. Yet even with this 'reduced complexity' we find that even the 'simplest' life on earth exceeds man's ability to produce such complexity in his computer programs or in his machines:
Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information - David L. Abel and Jack T. Trevors - Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
Mycoplasma Genitalium - The "Simplest" Life On Earth - video
First-Ever Blueprint of 'Minimal Cell' Is More Complex Than Expected - Nov. 2009
Excerpt: A network of research groups,,, approached the bacterium at three different levels. One team of scientists described M. pneumoniae's transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation.
There’s No Such Thing as a ‘Simple’ Organism - November 2009
Simplest Microbes More Complex than Thought - Dec. 2009
Excerpt: PhysOrg reported that a species of Mycoplasma,,, "The bacteria appeared to be assembled in a far more complex way than had been thought." Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes.
On top of the fact that we now know the genetic code of the simplest organism ever found on Earth is a highly advanced 'logic' code, far surpassing man's ability to devise such codes, we also know for a fact that no operation of logic ever performed by a computer will increase the algorithmic information inherent in the computer's program; i.e., Bill Gates will never use random number generators and selection software to write more highly advanced computer code:
The 'simplest' life possible, by experiment, requires several hundred distinct interlocking protein types which further interlock with several hundred distinct genes in the DNA, which are all further interlocked with RNA, and the associated protein machinery, in an irreducibly complex manner that defies all attempts to reduce its complexity further.
William Dembski calls the DNA, RNA, Protein interlock problem :
"Irreducible Complexity on steroids".
Biological function and the genetic code are interdependent: Voie:
Life never ceases to astonish scientists as its secrets are more and more revealed. In particular the origin of life remains a mystery:
Journey Inside The Cell - DNA to mRNA to Proteins - Stephen Meyer - Signature In The Cell - video
Recently Craig Venter, famous for deciphering the human genome, created quite a stir in the public imagination by claiming to have 'created' synthetic life. The fact is that the claim was a gross exaggeration of what Venter's group had actually accomplished, for the truth is that they did not truly 'create' anything, not even a single protein or gene; they merely copied information that was already present in life:
Is Craig Venter’s Synthetic Cell Really Life? - July 2010
Excerpt: David Baltimore was closer to the truth when he told the New York Times that the researchers had not created life so much as mimicked it. It might be still more accurate to say that the researchers mimicked one part and borrowed the rest.
Stephen Meyer Discusses Craig Venter's "Synthetic Life" on CBN - video
Aside from the small, but impressive, technical feat of Venter's work, in reality researchers can't even say with 100% certainty what the minimal gene set for a genome is, much less are they anywhere near creating life from scratch in the laboratory:
John I. Glass et al., "Essential Genes of a Minimal Bacterium," PNAS USA 103 (2006): 425-30.
Excerpt: "An earlier study published in 1999 estimated the minimal gene set to fall between 265 and 350. A recent study making use of a more rigorous methodology estimated the essential number of genes at 382.,,, Given the evolutionary path of extreme genome reduction taken by M. genitalium, it is likely that all its 482 protein-coding genes are in some way necessary for effective growth in its natural habitat"
Life’s Minimum Complexity Supports ID - Fazale Rana - November 2011
Excerpt (page 16): The Stanford investigators determined that the essential genome of C. crescentus consisted of just over 492,000 base pairs (genetic letters), which is close to 12 percent of the overall genome size. About 480 genes comprise the essential genome, along with nearly 800 sequence elements that play a role in gene regulation.,,, When the researchers compared the C. crescentus essential genome to other essential genomes, they discovered a limited match. For example, 320 genes of this microbe's basic genome are found in the bacterium E. coli. Yet, of these genes, over one-third are nonessential for E. coli. This finding means that a gene is not intrinsically essential. Instead, it's the presence or absence of other genes in the genome that determines whether or not a gene is essential.,,,
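The quoted figures are internally consistent, as a quick back-of-envelope check in Python shows (the ~4 Mb result it implies matches the published C. crescentus genome size):

essential_bp = 492_000       # essential genome size in base pairs, per the excerpt
essential_fraction = 0.12    # "close to 12 percent of the overall genome size"
total_bp = essential_bp / essential_fraction
print(f"Implied total genome: ~{total_bp/1e6:.1f} million base pairs")  # ~4.1 Mb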
The following study highlights the inherent fallacy in the gene deletion/knockout experiments that have led many scientists astray in the past, causing them to underestimate what the minimal genome for life should actually be:
Minimal genome should be twice the size - 2006
Excerpt: “Previous attempts to work out the minimal genome have relied on deleting individual genes in order to infer which genes are essential for maintaining life,” said Professor Laurence Hurst from the Department of Biology and Biochemistry at the University of Bath. “This knock out approach misses the fact that there are alternative genetic routes, or pathways, to the production of the same cellular product. “When you knock out one gene, the genome can compensate by using an alternative gene. “But when you repeat the knock out experiment by deleting the alternative, the genome can revert to the original gene instead. “Using the knock-out approach you could infer that both genes are expendable from the genome because there appears to be no deleterious effect in both experiments.
Mouse Genome Knockout Experiment
Jonathan Wells on Darwinism, Science, and Junk DNA - November 2011
Excerpt: Mice without “junk” DNA. In 2004, Edward Rubin and a team of scientists at Lawrence Berkeley Laboratory in California reported that they had engineered mice missing over a million base pairs of non-protein-coding (“junk”) DNA—about 1% of the mouse genome—and that they could “see no effect in them.”
But molecular biologist Barbara Knowles (who reported the same month that other regions of non-protein-coding mouse DNA were functional) cautioned that the Lawrence Berkeley study didn’t prove that non-protein-coding DNA has no function. “Those mice were alive, that’s what we know about them,” she said. “We don’t know if they have abnormalities that we don’t test for.” And University of California biomolecular engineer David Haussler said that the deleted non-protein-coding DNA could have effects that the study missed. “Survival in the laboratory for a generation or two is not the same as successful competition in the wild for millions of years,” he argued.
In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks. By the end of the study, a substantially higher proportion of the DNA-deficient mice had died from heart disease. Clearly, removing so-called “junk” DNA can have effects that appear only later or under other circumstances.
The probabilities against life 'spontaneously' originating are simply overwhelming:
In fact Dean Kenyon, who was a leading Origin Of Life researcher as well as a college textbook author on the subject in the 1970s, admitted after years of extensive research:
"We have not the slightest chance for the chemical evolutionary origin of even the simplest of cells".
Origin Of Life? - Probability Of Protein And The Information Of DNA - Dean Kenyon - video
Probability Of A Protein and First Living Cell - Chris Ashcraft - video (notes in description)
Stephen Meyer - Proteins by Design - Doing The Math - video
Signature in the Cell - Book Review - Ken Peterson
In fact years ago Fred Hoyle arrived at approximately the same number, one chance in 10^40,000, for life spontaneously arising. From this number, Fred Hoyle compared the random emergence of the simplest bacterium on earth to the likelihood “a tornado sweeping through a junkyard might assemble a Boeing 747 therein”. Fred Hoyle also compared the chance of obtaining just one single functioning protein molecule, by chance combination of amino acids, to a solar system packed full of blind men solving Rubik’s Cube simultaneously.
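Hoyle's 10^40,000 figure is usually reported as the product of an estimated 1-in-10^20 chance per enzyme over roughly 2,000 required enzymes; here is a sketch of that multiplication in Python, assuming those two inputs:

# Hoyle's reported calculation: ~2,000 enzymes, each at ~1 chance in 10^20,
# multiplied together (done in log10 to avoid overflow).
log_p_per_enzyme = -20   # estimated chance (log10) of one functional enzyme arising
n_enzymes = 2000         # number of enzymes taken as required
log_p_all = n_enzymes * log_p_per_enzyme
print(f"Combined chance: ~10^{log_p_all}")  # 10^-40000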
Professor Harold Morowitz shows the Origin of Life 'problem' escalates dramatically beyond the 1 in 10^40,000 figure when working from a thermodynamic perspective:
Dr. Don Johnson lays out some of the probabilities for life in this following video:
Probabilities Of Life - Don Johnson PhD. - 38 minute mark of video
a typical functional protein - 1 part in 10^175
the required enzymes for life - 1 part in 10^40,000
a living self replicating cell - 1 part in 10^340,000,000
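For scale, these probabilities can also be restated as bits of functional information, using the standard conversion log2(10^n) = n / log10(2); a minimal Python sketch:

import math
# Convert the probability figures listed above into bits of functional
# information: a 1-in-10^n event corresponds to n / log10(2) bits.
figures = [("typical functional protein", 175),
           ("required enzymes for life", 40_000),
           ("living self-replicating cell", 340_000_000)]
for label, n in figures:
    bits = n / math.log10(2)
    print(f"{label}: 1 in 10^{n:,} ~ {bits:,.0f} bits")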
Programming of Life - Probability of a Cell Evolving - video
Programming of Life - video playlist:
Also of interest is the information content that is derived in a cell when working from a thermodynamic perspective:
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization amount to only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes, or 10^12 bits.” – R. C. Wysong
Carl Sagan, "Life" in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
Of note: the 10^12 bits of information figure for a bacterium is derived from entropic considerations, which, due to the tightly integrated relationship between information and entropy, is considered the most accurate measure of the transcendent quantum information/entanglement that constrains a 'simple' life form to be so far out of thermodynamic equilibrium.
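As a rough consistency check on Wysong's two figures, a minimal Python sketch (the bits-per-page density it derives is an implication of the quote, not an independent measurement):

total_bits = 1e12   # Wysong's information estimate for one E. coli cell
pages = 100e6       # "100 million pages of Encyclopedia Britannica"
bits_per_page = total_bits / pages
print(f"Implied density: {bits_per_page:,.0f} bits/page (~{bits_per_page/8:,.0f} characters/page)")
# ~10,000 bits, i.e. ~1,250 eight-bit characters - roughly one printed page
# of text, so the two quoted figures are at least mutually consistent.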
For calculations, from the thermodynamic perspective, please see the following site:
Molecular Biophysics – Information theory. Relation between information and entropy: - Setlow-Pollard, Ed. Addison Wesley
Further comments:
The Theist holds the Intellectual High-Ground - March 2011
Excerpt: To get a range on the enormous challenges involved in bridging the gaping chasm between non-life and life, consider the following: “The difference between a mixture of simple chemicals and a bacterium, is much more profound than the gulf between a bacterium and an elephant.” (Dr. Robert Shapiro, Professor Emeritus of Chemistry, NYU)
Scientists Prove Again that Life is the Result of Intelligent Design - Rabbi Moshe Averick - August 2011
Excerpt: “To go from bacterium to people is less of a step than to go from a mixture of amino acids to a bacterium.” - Dr. Lynn Margulis
Here is a related article with several more excellent quotes, by leading origin of life researchers, commenting on the 'problem' that the origin of life presents to 'science' (actually it is only a problem for atheists who 'believe' that 'science' equates strictly to their reductive materialistic view of reality):
Faye Flam: Atheist Writer Who is Long on Graciousness, Long on Civility… Short on Reason, Short on Scientific Realities - Rabbi Averick
The following videos give a small glimpse into how the probabilities are calculated for the origin of life:
The Origin of Life - Lecture On Probability - John Walton - Professor Of Chemistry - short video
Entire Video:
Protein Molecules and "Simple" Cells - video
Further comment:
Ilya Prigogine was an eminent chemist and physicist who received the 1977 Nobel Prize in chemistry. Regarding the probability of life originating by accident, he said:
Ilya Prigogine, Gregoire Nicolis, and Agnes Babloyantz, Physics Today 25, pp. 23-28. (Sourced Quote)
Anyone who has debated an evolutionist over the probability of life spontaneously arising knows that it can be quite frustrating, because eventually one realizes that many times the evolutionist will not be reasonable on the matter and is operating on nothing but blind faith that life can spontaneously arise by unintelligent processes. Here are a few more links relating to the (im)probability of life:
Probability's Nature and Nature's Probability: A Call to Scientific Integrity - Donald E. Johnson
Excerpt: "one should not be able to get away with stating “it is possible that life arose from non-life by ...” or “it’s possible that a different form of life exists elsewhere in the universe” without first demonstrating that it is indeed possible (non-zero probability) using known science. One could, of course, state “it may be speculated that ... ,” but such a statement wouldn’t have the believability that its author intends to convey by the pseudo-scientific pronouncement."
Intelligent Design: Required by Biological Life? K.D. Kalinsky - Pg. 11
Excerpt: It is estimated that the simplest life form would require at least 382 protein-coding genes. Using our estimate in Case Four of 700 bits of functional information required for the average protein, we obtain an estimate of about 267,000 bits for the simplest life form. Again, this is well above Inat and it is about 10^80,000 times more likely that ID (Intelligent Design) could produce the minimal genome than mindless natural processes.
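Kalinsky's arithmetic can be checked directly; a minimal Python sketch of the multiplication and the bits-to-odds conversion (both inputs are his figures from the excerpt):

import math
genes = 382            # protein-coding genes in the "simplest life form", per the excerpt
bits_per_gene = 700    # Kalinsky's Case Four estimate per average protein
total_bits = genes * bits_per_gene           # 267,400 bits (the quoted "about 267,000")
log10_ratio = total_bits * math.log10(2)     # 2^267,400 ~ 10^80,495
print(f"{total_bits:,} bits ~ a factor of 10^{log10_ratio:,.0f}")
# ~10^80,000, matching the "10^80,000 times more likely" figure in the excerpt.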
Could Chance Arrange the Code for (Just) One Gene?
Even the low-end 'hypothetical' probability estimate given by an evolutionist for life spontaneously arising is fantastically impossible:
General and Special Evidence for Intelligent Design in Biology:
- The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of the following is required:
- two rRNAs with a total size of at least 1000 nucleotides
- ~10 primitive adaptors of ~30 nucleotides each, in total ~300 nucleotides
- at least one RNA encoding a replicase, ~500 nucleotides (low bound)
In the above notation, n = 1800, resulting in E < 10^-1018. That is, the chance of life occurring by natural processes is 1 in 10 followed by 1018 zeros. (Koonin's intent was to show that, short of postulating a multiverse of an infinite number of universes (Many Worlds), the chance of life occurring on earth is vanishingly small.)
The cosmological model of eternal inflation and the transition from chance to biological evolution in the history of life - Eugene V Koonin
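For the character of the arithmetic only, here is the naive per-nucleotide version of such an estimate as a Python sketch (Koonin's published bound of E < 10^-1018 comes from his more detailed model, not from this simple calculation):

import math
# Naive estimate: the chance of assembling one specific n-nucleotide
# sequence from a 4-letter alphabet in a single trial is 4^-n.
n = 1800                         # rRNAs + adaptors + replicase, per the excerpt above
log10_p = -n * math.log10(4)     # ~ -1083.7
print(f"Naive chance of one specific {n}-mer: ~10^{log10_p:.0f}")  # ~10^-1084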
Origin of life both one of the hardest and most important problems in science - November 2011
Excerpt: 'Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin of life field is a failure – we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle.' - Eugene V. Koonin, molecular biologist
It should be stressed that Dr. Koonin tries to account for the origination of the massive amount of functional information required for the Origin of Life by appealing to an 'unelucidated and undirected' mechanism of Quantum Mechanics called 'Many Worlds in One' (he is trying to invoke a 'materialistic miracle'). Besides ignoring the fact that quantum events, on the whole, are strictly constrained by the transcendent universal laws/constants of the universe, including and especially the second law of thermodynamics, for as far back in time as we can 'observe', it is also fair to note, in criticism of Dr. Koonin's scenario, that appealing to the undirected infinite probabilistic resources of the Many Worlds scenario actually greatly increases the amount of totally chaotic information one would expect to see generated 'randomly' on the earth. In fact, the Many Worlds scenario greatly increases the likelihood that we would witness total chaos, instead of order, surrounding us, as the following video points out:
Finely Tuned Big Bang, Elvis In Many Worlds, and the Schroedinger Equation – Granville Sewell – audio
Though Koonin appeals to a 'modern version' of Many Worlds, called 'Many Worlds in One' (Alex Vilenkin), 'Many Worlds' was originally devised because of the inability of materialistic scientists to find adequate causation for quantum wave collapse in the first place (i.e. adequate causation that did not involve God!):
Quantum mechanics
Perhaps some may say Everett's Many Worlds is not absurd; if so, then in some other parallel universe, where Elvis happens to now be president of the United States, they actually do think that the Many Worlds conjecture is absurd! That type of 'flexible thinking' within science I find to be completely absurd! And that one 'Elvis' example from Many Worlds is just small potatoes compared to the levels of absurdity that we would actually witness in reality if Many Worlds were actually true.
Though Eugene Koonin is correct to recognize that the infinite probabilistic resources postulated in 'Many Worlds' do not absolutely preclude the sudden appearance of massive amounts of functional information on the earth, he is very incorrect to disregard the 'Logos' of John 1:1 needed to correctly specify the 'precisely controlled mechanism of implementation' for the massive amounts of complex, functional, and specified information witnessed abruptly and mysteriously appearing in the first life on earth, as well as in the subsequent 'sudden' appearances of life on earth. I.e., Koonin must sufficiently account for the 'cause' of the 'effect' he wants to explain. And as I have noted previously, Stephen Meyer clearly points out that the only known cause now in operation sufficient to explain the generation of the massive amounts of functional information we find in life is intelligence:
Stephen C. Meyer – What is the origin of the digital information found in DNA? – August 2010 - video
Evolutionist Koonin's estimate of 1 in 10 followed by 1018 zeros, for the probability of the simplest self-replicating molecule 'randomly occurring', is a fantastically large number. The number, 10^1018, if written out in its entirety, would be a 1 with one thousand eighteen zeros to the right of it! The universe itself is estimated to contain only about 10^80 particles, i.e. a 1 with eighty zeros. This is clearly well beyond the 10^150 universal probability bound set by William Dembski, and is thus clearly an irreducibly complex condition. Basically Koonin, in appealing to a never-before-observed 'materialistic miracle' from the 'Many Worlds' hypothesis, clearly illustrates that the materialistic argument essentially appears to be like this:
Conclusion: Therefore, it must arise from some unknown materialistic cause
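Since a number like 10^1018 cannot even be held in an ordinary calculator, here is a minimal log-scale sketch (my own illustration; the three exponents are simply the figures quoted above) showing just how far Koonin's estimate overshoots both the particle count of the universe and Dembski's bound:

```python
# Minimal sketch: compare the exponents quoted above on a log10 scale,
# since 10^1018 is far too large to represent as an ordinary float.
log10_koonin_odds   = 1018  # Koonin: 1 in 10^1018 for a self-replicator
log10_particles     = 80    # ~10^80 particles in the observable universe
log10_dembski_bound = 150   # Dembski's universal probability bound

print(f"Koonin's odds exceed the particle count by a factor of 10^{log10_koonin_odds - log10_particles}")
print(f"Koonin's odds exceed Dembski's bound by a factor of 10^{log10_koonin_odds - log10_dembski_bound}")
```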
On the other hand, Stephen Meyer describes the intelligent design argument as follows:
There remains one and only one type of cause that has shown itself able to create functional information like we find in cells, books and software programs -- intelligent design. We know this from our uniform experience and from the design filter -- a mathematically rigorous method of detecting design. Both yield the same answer. (William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, p. 90 (InterVarsity Press, 2010).)
Stephen Meyer - The Scientific Basis for the Intelligent Design Inference - video
Though purely material processes have NEVER shown the ability to produce ANY functional information whatsoever (Abel - Null Hypothesis), Darwinists are adamant that material processes produced more information than is contained in a very large library, and information of a much higher level of integrated complexity than man can produce:
“Again, this is characteristic of all animal and plant cells. Each nucleus … contains a digitally coded database larger, in information content, than all 30 volumes of the Encyclopaedia Britannica put together. And this figure is for each cell, not all the cells of a body put together. … When you eat a steak, you are shredding the equivalent of more than 100 billion copies of the Encyclopaedia Britannica.”
(Dawkins R., "The Blind Watchmaker" [1986], Penguin: London, 1991, reprint, pp. 17-18. Emphasis in original)
When faced with the staggering impossibilities of random material processes ever generating any functional information, Evolutionists will sometimes make the claim that infinite monkeys banging away on infinite typewriters could produce the entire works of Shakespeare. Well, someone humorously put that 'hypothesis' to the test:
Monkey Theory Proven Wrong:
Excerpt: A group of faculty and students in the university’s media program left a computer in the monkey enclosure at Paignton Zoo in southwest England, home to six Sulawesi crested macaques. Then, they waited. At first, said researcher Mike Phillips, “the lead male got a stone and started bashing the hell out of it. “Another thing they were interested in was in defecating and urinating all over the keyboard,” added Phillips, who runs the university’s Institute of Digital Arts and Technologies. Eventually, monkeys Elmo, Gum, Heather, Holly, Mistletoe and Rowan produced five pages of text, composed primarily of the letter S. Later, the letters A, J, L and M crept in — not quite literature.
The following is a very interesting 'origin of first self-replicating molecule' interview with one of the top chemists in America today:
On The Origin Of Life And God - Henry F. Schaefer, III PhD. - video
Further comments:
Intelligent Design or Evolution? Stuart Pullen
The chemical origin of life is the most vexing problem for naturalistic theories of life's origins. Despite an intense 50 years of research, how life can arise from non-life through naturalistic processes is as much a mystery today as it was fifty years ago, if not more.
Szostak on Abiogenesis: Just Add Water - Cornelius Hunter - Aug. 2009
Excerpt: "While Szostak and Ricardo may sound scientific with their summary of the abiogenesis research, the article is firmly planted in the non scientific evolution genre where evolution is dogmatically mandated to be a fact. Consequently, the bar is lowered dramatically as the silliest of stories pass as legitimate science."
Along these lines of 'silliest of stories' passing for rigorous science in origin of life research:
Grandma Gets Sexy Idea for Origin of Life - August 2010
Excerpt: In the video clip, she suggested that it might be possible some day to get good evidence for her ideas on the origin of life, implying that evidence has not yet been a primary concern.
SETI Ignorance Gets Stronger - December 2010
Excerpt: Steve Benner, an origin of life researcher, “used the analogy of a steel chain with a tinfoil link to illustrate that the arsenate ion said to replace phosphate in the bacterium’s DNA forms bonds that are orders of magnitude less stable.”
Pumice and the Origin of Life - October 17, 2011
Excerpt: However, the reactions required are not simple reactions, and the steps involved, even using a substrate such as pumice, are still too numerous and specific to have happened by chance. It appears highly unlikely that pumice is capable of solving the problem of the origin of life.
"In my opinion, there is no basis in known chemistry for the belief that long sequences of reactions can organize spontaneously -- and every reason to believe that they cannot. The problem of achieving sufficient specificity, whether in consisting of or occurring within a water-based system, aqueous solution, or on the surface of a mineral, is so severe that the chance of closing a cycle of reactions as complex as the reverse citric acid cycle, for example, is negligible." Leslie Orgel, 1998
By the way, there is a one million dollar 'Origin-of-Life' prize being offered:
"The Origin-of-Life Prize" ® (hereafter called "the Prize") will be awarded for proposing a highly plausible mechanism for the spontaneous rise of genetic instructions in nature sufficient to give rise to life.
To reiterate, the problem for the origin of life clearly turns out to be explaining where the information came from in the first place:
Origin of life theorist Bernd-Olaf Kuppers in his book "Information and the Origin of Life".
Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
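The reviewer's '~500 bits' ceiling can be checked with a few lines of arithmetic. Here is a rough sketch under the review's own stated assumptions (the particle count, age of the universe, and Planck time used below are the standard textbook values, not figures from the review itself):

```python
import math

# Rough sketch of the review's "500 bits" upper bound: treat every particle
# in the observable universe as a lab performing one random trial per Planck
# time for the entire history of the universe.
particles   = 1e80     # assumed particle count of the observable universe
age_seconds = 4.3e17   # ~13.7 billion years, in seconds
planck_time = 5.4e-44  # seconds

trials = particles * (age_seconds / planck_time)  # total random trials
print(f"total trials: ~10^{math.log10(trials):.0f}")
print(f"searchable information: ~{math.log2(trials):.0f} bits")
# -> about 10^141 trials, i.e. roughly 470 bits: the same order of magnitude
#    as the ~500 bits cited in the review above.
```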
Of Molecules and (Straw) Men: Stephen Meyer Responds to Dennis Venema's Review of Signature in the Cell - Stephen C. Meyer October 9, 2011
Excerpt of Conclusion: The origin-of-life scenarios that Venema cites as alternatives to intelligent design lack biochemical plausibility and do not account for the ultimate origin of biological information.
"Monkeys Typing Shakespeare" Simulation Illustrates Combinatorial Inflation Problem - October 2011
Excerpt: In other words, Darwinian evolution isn't going to be able to produce fundamentally new protein folds. In fact, it probably wouldn't even be able to produce a single 9-character string of nucleotides in DNA, if that string would not be retained by selection until all 9 nucleotides were in place.
Natural selection cannot explain the origin of life
Paul Davies
“The existence of a genome and the genetic code divides the living organisms from nonliving matter. There is nothing in the physico-chemical world that remotely resembles reactions being determined by a sequence and codes between sequences.”,,,"The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences.”
Hubert P. Yockey: Information Theory, Evolution, and the Origin of Life, pages 2 and 5
H.P. Yockey also notes in the Journal of Theoretical Biology:
Norbert Wiener - MIT Mathematician - Father of Cybernetics
Programming of Life - October 2010
Excerpt: "Evolutionary biologists have failed to realize that they work with two more or less incommensurable domains: that of information and that of matter... These two domains will never be brought together in any kind of the sense usually implied by the term ‘reductionism.'... Information doesn't have mass or charge or length in millimeters. Likewise, matter doesn't have bytes... This dearth of shared descriptors makes matter and information two separate domains of existence, which have to be discussed separately, in their own terms."
George Williams - Evolutionary Biologist
The simplest non-parasitic bacterium ever found on earth is constructed with over a million individual protein molecules divided into hundreds of different protein types. Protein molecules are made from one-dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins (there are hundreds of amino acids found in nature, but only 20 are commonly used in life). These one-dimensional sequences of amino acids fold into highly complex three-dimensional structures. The proteins vary in the length of their amino acid sequences. The average sequence of a typical protein is about 300 to 400 amino acids long, yet many crucial proteins are thousands of amino acids long. Titin, which helps in the contraction of striated muscle tissues, consists of 34,350 amino acids and is the largest known protein. Some proteins are now shown to be absolutely irreplaceable in their specific biological/chemical reactions for the first cell:
Without enzyme, biological reaction essential to life takes 2.3 billion years: UNC study:
"Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundreth of a second; without it, it would take a trillion years!" Jonathan Sarfati
Programming of Life - Proteins & Enzymes - video
Book Review: Creating Life in the Lab: How New discoveries in Synthetic Biology Make a Case for the Creator - Rich Deem - January 2011
Excerpt: Despite all this "intelligent design," the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, "is it reasonable to think that undirected evolutionary processes routinely accomplished this task?"
Research group develops more efficient artificial enzyme - November 2011
When they try to heat solutions to get around these prohibitive reaction times, they run into the competing problem of product stability:
Is the Origin of Life in Hot Water? - December 2010
Excerpt: Heating a reaction does nothing for product stability. Cooling a reaction makes the reaction rate problems worse.
To reiterate what was quoted before, amino acids don't even have a tendency to chemically bond with each other, despite over fifty years of experimentation trying to get the amino acids to bond naturally. The odds of just one protein of 150 amino acids overcoming the barriers presented by chemical bonding and forming a functional protein spontaneously have been calculated at less than 1 in 10^164 (Meyer, Signature In The Cell); a back-of-the-envelope reconstruction of this figure follows the peptide-synthesis excerpt below. On top of the fact that nature cannot 'naturally' produce proteins, man's ability to 'intelligently' form a single synthetic amino acid chain (a protein), using all his intelligence and lab equipment, is currently limited to chains of about 70-100 amino acids:
Peptide synthesis
"typically peptides and proteins in the range of 70~100 amino acids are pushing the limits of synthetic accessibility. Synthetic difficulty also is sequence dependent; typically amyloid peptides and proteins are difficult to make."(To make larger proteins requires “non-natural” peptide bonds - (Chemical Synthesis Of Proteins - 2005))
On top of that, Doug Axe has shown that only 1 in 10^77 of any proteins 'randomly' formed would perform any beneficial biological function. The rest of the sequences would be totally useless for any meaningful function in the cell. Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place.
Doug Axe Knows His Work Better Than Steve Matheson
Excerpt: Regardless of how the trials are performed, the answer ends up being at least half of the total number of password possibilities, which is the staggering figure of 10^77 (written out as 100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000). Armed with this calculation, you should be very confident in your skepticism, because a 1 in 10^77 chance of success is, for all practical purposes, no chance of success. My experimentally based estimate of the rarity of functional proteins produced that same figure, making these likewise apparently beyond the reach of chance.
Evolution vs. Functional Proteins - Doug Axe - Video
Dennis Venema, a theistic evolutionist, tried to challenge Doug Axe's work on the extreme rarity of functional proteins, and here is what one of the very papers that Venema cited, to supposedly refute Axe, actually said:
Responding to Venema - Casey Luskin - October 2011
Excerpt: However, these experiments do not really model the evolution that occurs through gradual, step-by-step changes, with all intermediates being fully foldable proteins (Blanco et al., 1999). To create such an evolutionarily relevant path from all-α to all-β domains would be the next challenge for protein designers.
Axe Diagram for finding a functional protein domain out of all sequence space:
The y-axis can be seen as representing enzyme activity, and the x-axis represents all possible amino acid sequences. Enzymes sit at the peak of their fitness landscapes (Point A). There are extremely high levels of complex and specified information in proteins--informational sequences which point to intelligent design.
And how do Darwinists deal with the astronomical improbabilities that are stacked against them for explaining the novel origination of even just one required protein by 'natural' means, at least as far as the public is concerned??? Well, by deception, of course!
Back to School Part VIII
Excerpt: Amazingly evolutionists think hemoglobin’s special amino acid sequence encoding for this machine is no different than any random list, such (as) a list of birthdays. To be sensible Johnson’s and Losos’ analogy would need the list of birthdays to provide something fantastic, such as the answers to the biology class final exam.
Proteins Did Not Evolve Even According to the Evolutionist’s Own Calculations but so What, Evolution is a Fact - Cornelius Hunter - July 2011
Excerpt: For instance, in one case evolutionists concluded that the number of evolutionary experiments required to evolve their protein (actually it was to evolve only part of a protein and only part of its function) is 10^70 (a one with 70 zeros following it). Yet elsewhere evolutionists computed that the maximum number of evolutionary experiments possible is only 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude.
Kirk Durston has done work on defining how much functional information resides in proteins:
Does God Exist? - Argument From Molecular Biology - Proteins - Kirk Durston - short video
Intelligent Design - Kirk Durston - Lecture video
Measuring the functional sequence complexity of proteins - 2007: Kirk K Durston, David KY Chiu, David L Abel, Jack T Trevors
In this paper, we provide a method to measure functional sequence complexity (in proteins).
Conclusion: This method successfully distinguishes between order, randomness, and biological function (for proteins).
Intelligent Design: Required by Biological Life? K.D. Kalinsky - Pg. 10 - 11
Case Three: an average 300 amino acid protein:
Excerpt: It is reasonable, therefore, to estimate the functional information required for the average 300 amino acid protein to be around 700 bits of information. I(Ex) > Inat and ID (Intelligent Design) is 10^155 times more probable than mindless natural processes to produce the average protein.
"a very rough but conservative result is that if all the sequences that define a particular (protein) structure or fold-set where gathered into an area 1 square meter in area, the next island would be tens of millions of light years away."
Kirk Durston
Axe's work substantiates, and extends, previous work that was done at Massachusetts Institute of Technology (MIT):
Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other: - Michael Behe
"From actual experimental results it can easily be calculated that the odds of finding a folded protein are about 1 in 10 to the 65 power (Sauer, MIT).,,, The odds of finding a marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure. Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.”
Michael J. Behe, The Weekly Standard, June 7, 1999
Even the low end estimate, for functional proteins given by evolutionists (1 in 10^12), is very rare:
Fancy footwork in the sequence space shuffle - 2006
"Estimates for the density of functional proteins in sequence space range anywhere from 1 in 10^12 to 1 in 10^77. No matter how you slice it, proteins are rare. Useful ones are even more rare."
It is interesting to note that the 1 in 10^12 (trillion) estimate for functional proteins (Szostak), though still very rare and of insurmountable difficulty for a materialist to use in any evolutionary scenario, was arrived at by the in-vitro (outside a living organism) binding of ANY random proteins to the 'universal' ATP energy molecule.
How Proteins Evolved - Cornelius Hunter - December 2010
The entire episode of Szostak's failed attempt to establish the legitimacy of the 1 in 10^12 functional protein number, from a randomly generated library of proteins, can be read here:
This following paper was the paper that put the final nail in the coffin for Szostak's work:
Here is a very interesting comment by Jack Szostak himself:
The Origin of Life on Earth
Dr. Jack Szostak - Nobel Laureate and leading Origin of Life researcher who, despite the evidence he sees first hand, still believes 'life' simply 'emerged' from molecules
Further defence of Dr. Axe's work on the rarity of proteins:
Axe (2004) And The Evolution Of Protein Folds - March 2011
On top of the fact that Origin of Life researcher Jack Szostak, and others, failed to generate any biologically relevant proteins from a library of trillions of randomly generated proteins, proteins have now been shown to have a 'Cruise Control' mechanism, which works to 'self-correct' the integrity of the protein structure against any random mutations imposed on them.
Proteins with cruise control provide new perspective:
Cruise Control permeating the whole of the protein structure??? This is an absolutely fascinating discovery. The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact it seems readily apparent to me that highly advanced mathematical information must reside 'transcendentally' along the entirety of the protein structure, in order to achieve such control of the overall protein structure. This fact gives us clear evidence that there is far more functional information residing in proteins than meets the eye. Moreover this ‘oneness’ of cruise control, within the protein structure, can only be achieved through quantum computation/entanglement principles, and is inexplicable to the reductive materialistic approach of neo-Darwinism! For a sample of the equations that must be dealt with, to 'engineer' even a simple process control loop like cruise control for a single protein, please see this following site:
PID controller
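To give a concrete feel for what even the simplest 'process control loop' involves, here is a minimal discrete-time PID cruise-control sketch (entirely my own toy illustration; the gains and the one-line car model are arbitrary assumptions, not anything taken from the protein papers above):

```python
# Minimal discrete-time PID cruise-control loop. All numbers here (gains,
# drag coefficient, setpoint) are arbitrary toy assumptions for illustration.
kp, ki, kd = 0.8, 0.15, 0.05   # proportional, integral, derivative gains
dt, setpoint = 0.1, 30.0       # timestep (s) and target speed (m/s)

speed, integral, prev_error = 0.0, 0.0, setpoint
for _ in range(300):                        # simulate 30 seconds
    error = setpoint - speed
    integral += error * dt                  # accumulated error
    derivative = (error - prev_error) / dt  # rate of change of error
    throttle = kp * error + ki * integral + kd * derivative
    prev_error = error
    # toy car model: acceleration = throttle minus drag proportional to speed
    speed += (throttle - 0.1 * speed) * dt

print(f"speed after 30 s: {speed:.2f} m/s (target {setpoint} m/s)")
```

Even this toy loop needs tuned proportional, integral, and derivative terms to settle on its setpoint; the point of the comparison above is that a protein would need the functional equivalent of such feedback along its entire structure.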
It is in realizing the staggering level of engineering that must be dealt with to achieve 'cruise control' for each individual protein, along the entirety of the protein structure, that it becomes apparent that even Axe's 1 in 10^77 estimate for the rarity of finding specific functional proteins within sequence space is far, far too generous. In fact, probabilities over various 'specific' configurations of material particles simply do not even apply, since the 'cause' of the non-local quantum information does not reside within the material particles in the first place (i.e. the falsification of local realism; Alain Aspect). Here is corroborating evidence that 'protein specific' quantum information/entanglement resides in functional proteins:
Quantum states in proteins and protein assemblies:
In fact, since quantum entanglement falsified reductive materialism/local realism (Alain Aspect), finding quantum entanglement/information to be 'protein specific' is absolutely shattering to any hope that materialists had in whatever slim probabilities there were, since a 'transcendent' cause must be supplied which is specific to each unique protein structure. Materialism is simply at a complete loss to supply such a 'non-local' transcendent cause, whereas Theism has always postulated a transcendent cause for life!
Though the authors of the 'cruise control' paper tried to put an evolution-friendly spin on the 'cruise control' evidence, finding a highly advanced 'Process Control Loop' at such a basic molecular level, before natural selection even has a chance to select for any morphological novelty of a protein, is very much to be expected as an Intelligent Design/Genetic Entropy feature, and is in fact very constraining on the amount of variation we should reasonably expect from any 'kind' of species in the first place.
Here are some more articles highlighting the extreme rarity of functional proteins:
Minimal Complexity Relegates Life Origin Models To Fanciful Speculation - Nov. 2009
Excerpt: Based on the structural requirements of enzyme activity Axe emphatically argued against a global-ascent model of the function landscape in which incremental improvements of an arbitrary starting sequence "lead to a globally optimal final sequence with reasonably high probability". For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small - somewhere between 1 in 10^140 and 1 in 10^164 for a 150 amino acid long sequence if we factor in the probabilities of forming peptide bonds and of incorporating only left handed amino acids.
The Case Against a Darwinian Origin of Protein Folds - Douglas Axe - 2010
Excerpt Pg. 11: "Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin."
The Case Against a Darwinian Origin of Protein Folds - Douglas Axe, Jay Richards - audio
The following site offers a short summary of the 'Darwinian shortcuts' that failed to overcome Axe's finding for the rarity of protein folds:
Shortcuts to new protein folds - October 2010
Here are articles that clearly illustrate that the protein evidence, no matter how crushing to Darwinism, is always crammed into the Darwinian framework by Evolutionists:
The Hierarchy of Evolutionary Apologetics: Protein Evolution Case Study - Cornelius Hunter - January 2011
Here is a critique of the failed attempt to evolve a 'fit' protein to replace a protein in a virus which had a gene knocked out:
New Genes: Putting the Theory Before the Evidence - January 2011
Excerpt: What they discovered was that the evolutionary process could produce only tiny improvements to the virus’ ability to infect a host. Their evolved sequences showed no similarity to the native sequence which is supposed to have evolved. And the best virus they could produce, even with the vast majority of the virus already intact, was several orders of magnitude weaker than nature’s virus.
The theory, even by the evolutionist’s own reckoning, is unworkable. Evolution fails by a degree that is incomparable in science. Scientific theories often go wrong, but not by 27 orders of magnitude. And that is conservative.
Here is a fairly good defense of the rarity of protein folds, from a blogger called gpuccio, from the best Darwinian objections that could be mustered against it:
Signature In The Cell - Review
Our most advanced supercomputers pale in comparison to the assumption, generously granted to evolutionists, of a universe full of chemical laboratories 'randomly' searching for a functional protein in sequence space:
"SimCell," anyone?
"Unfortunately, Schulten's team won't be able to observe virtual protein synthesis in action. Even the fastest supercomputers can only depict such atomic complexity for a few dozen nanoseconds." - cool cellular animation videos on the site
Instead of just looking at the probability of finding a single 'simple' functional protein molecule by chance (a solar system full of blind men simultaneously solving Rubik's Cubes, per Hoyle), let's also look at the complexity which goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is indeed the handiwork of an infinitely powerful Creator.
Francis Collins on Making Life
Excerpt: 'We are so woefully ignorant about how biology really works. We still don't understand how a particular DNA sequence—when we just stare at it—codes for a protein that has a particular function. We can't even figure out how that protein would fold—into what kind of three-dimensional shape. And I would defy anybody who is going to tell me that they could, from first principles, predict not only the shape of the protein but also what it does.' - Francis Collins - Former Director of the Human Genome Project
Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator - Fazale Rana
Excerpt of Review: ‘Another interesting section of Creating Life in the Lab is one on artificial enzymes. Biological enzymes catalyze chemical reactions, often increasing the spontaneous reaction rate by a billion times or more. Scientists have set out to produce artificial enzymes that catalyze chemical reactions not used in biological organisms. Comparing the structure of biological enzymes, scientists used super-computers to calculate the sequences of amino acids in their enzymes that might catalyze the reaction they were interested in. After testing dozens of candidates,, the best ones were chosen and subjected to “in vitro evolution,” which increased the reaction rate up to 200-fold. Despite all this “intelligent design,” the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, “is it reasonable to think that undirected evolutionary processes routinely accomplished this task?”
In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, which was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated to take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape.
"Blue Gene's final product, due in four or five years, will be able to "fold" a protein made of 300 amino acids, but that job will take an entire year of full-time computing." Paul Horn, senior vice president of IBM research, September 21, 2000
Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule:
A Few Hundred Thousand Computers vs. A Single Protein Molecule - video
Interestingly, there are some (perhaps many?) complex protein folding problems found by scientists that have still refused to be solved by the brute number-crunching power of super-computers, but, 'surprisingly', these problems have been solved by the addition of 'human intuition':
So Much For Random Searches - PaV - September 2011
Excerpt: There’s an article in Discover Magazine about how gamers have been able to solve a problem in HIV research in only three weeks (!) that had remained outside of researcher’s powerful computer tools for years. This, until now, unsolvable problem gets solved because: "They used a wide range of strategies, they could pick the best places to begin, and they were better at long-term planning. Human intuition trumped mechanical number-crunching." Here’s what intelligent agents were able to do within the search space of possible solutions:,,, "until now, scientists have only been able to discern the structure of the two halves together. They have spent more than ten years trying to solve structure of a single isolated half, without any success. The Foldit players had no such problems. They came up with several answers, one of which was almost close to perfect. In a few days, Khatib had refined their solution to deduce the protein’s final structure, and he has already spotted features that could make attractive targets for new drugs." Thus,,
Random search by powerful computer: 10 years and No Success
Intelligent Agents guiding powerful computing: 3 weeks and Success.
As well, despite some very optimistic claims, it seems future 'quantum computers' will not fare much better at finding functional proteins in sequence space than even an idealized 'material' supercomputer of today can do:
The Limits of Quantum Computers – March 2008
Excerpt: "Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today’s computers only modestly. This realization may lead to a new fundamental physical principle"
The Limits of Quantum Computers - Scott Aaronson - 2007
Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to “solve impossible problems in an instant” by trying exponentially many solutions in parallel. In this talk, I’ll describe four results in quantum computing theory that directly challenge this view.,,, Second I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”,,,
Here is Scott Aaronson's blog, in which he refutes recent claims that P=NP (of note: if P were found to equal NP, then a million dollar prize would be awarded to the mathematician who provided the proof that NP problems could be solved in polynomial time):
Excerpt: Quantum computers are not known to be able to solve NP-complete problems in polynomial time.
Protein folding is found to be an 'intractable NP-complete problem' by several different methods; a toy illustration of the combinatorial blow-up involved follows the survey excerpt below. Thus protein folding will not be able to take advantage of any speed-ups that quantum computation may offer for those other problems of computation that can be solved in polynomial time:
Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009
Excerpt: Protein Folding: Computational Complexity
NP-completeness: from 10^300 to 2 Amino Acid Types
NP-completeness: Protein Folding in Ad-Hoc Models
NP-completeness: Protein Folding in the HP-Model
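To make the combinatorial blow-up behind these NP-completeness results concrete, here is the toy illustration promised above: a brute-force folder for the 2-D HP lattice model (my own sketch, not code from the survey). Even for a 6-residue chain it must sift about a thousand candidate walks, and the candidate count multiplies by roughly four with every added residue:

```python
from itertools import product

# Toy 2-D HP-lattice protein folder (my illustration, not the survey's code):
# exhaustively enumerate self-avoiding walks on a square lattice and score
# each by the number of non-adjacent H-H contacts (the HP energy function).
MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

def best_fold(seq):
    n, best = len(seq), (0, None)
    for dirs in product(MOVES, repeat=n - 1):      # all 4^(n-1) walks
        path = [(0, 0)]
        for d in dirs:
            nxt = (path[-1][0] + MOVES[d][0], path[-1][1] + MOVES[d][1])
            if nxt in path:                        # not self-avoiding: reject
                break
            path.append(nxt)
        if len(path) < n:
            continue
        # count H-H pairs that are lattice neighbours but not chain neighbours
        contacts = sum(
            1 for i in range(n) for j in range(i + 2, n)
            if seq[i] == seq[j] == 'H'
            and abs(path[i][0] - path[j][0]) + abs(path[i][1] - path[j][1]) == 1)
        if contacts > best[0]:
            best = (contacts, dirs)
    return best

contacts, fold = best_fold('HPPHPH')   # 6 residues -> 4^5 = 1024 candidates
print(f"best fold {fold} scores {contacts} H-H contacts")
```

The exponential growth in candidate conformations is the intuition behind why raw computing speed, quantum or otherwise, does not rescue an exact search; the NP-completeness results cited above make that intuition rigorous.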
Another factor severely complicating man's ability to properly mimic protein folding is that, much contrary to evolutionary thought, many proteins fold differently in different 'molecular' situations:
The Gene Myth, Part II - August 2010
As a sidelight to the complexity found for folding any relatively short amino acid sequence into a 3-D protein, the complexity of computing the actions of even a simple atom, in detail, quickly exceeds the capacity of our most advanced supercomputers of today:
Delayed time zero in photoemission: New record in time measurement accuracy - June 2010
Excerpt: Although they could confirm the effect qualitatively using complicated computations, they came up with a time offset of only five attoseconds. The cause of this discrepancy may lie in the complexity of the neon atom, which consists, in addition to the nucleus, of ten electrons. "The computational effort required to model such a many-electron system exceeds the computational capacity of today's supercomputers," explains Yakovlev.
Also of interest to the extreme difficulty man has in computing the folding of a protein within any reasonable amount of time, it seems water itself (H2O) was 'designed' with protein folding in mind:
Protein Folding: One Picture Per Millisecond Illuminates The Process - 2008
Water Is 'Designer Fluid' That Helps Proteins Change Shape - 2008
Excerpt: "When bound to proteins, water molecules participate in a carefully choreographed ballet that permits the proteins to fold into their functional, native states. This delicate dance is essential to life."
There are overlapping 'chaperone' systems insuring that proteins fold into the precisely correct shape:
Proteins Fold Who Knows How - July 2010
Excerpt: New work published in Cell shows that this “chaperone” device speeds up the proper folding of the polypeptide when it otherwise might get stuck on a “kinetic trap.” A German team likened the assistance to narrowing the entropic funnel. “The capacity to rescue proteins from such folding traps may explain the uniquely essential role of chaperonin cages within the cellular chaperone network,” they said. GroEL+GroES therefore “rescues” protein that otherwise might misfold and cause damage to the cell.,,, “In contrast to all other components of this chaperone network, the chaperonin, GroEL, and its cofactor, GroES, are uniquely essential, forming a specialized nano-compartment for single protein molecules to fold in isolation.”
Nature Review Article Yields Unpleasant Data For Darwinism - August 2011
Excerpt: The number of possible shapes that a protein can fold into is very high and folding reactions are very complex, involving the co-operation of many weak, non-covalent interactions. A high percentage of proteins do not fold automatically into the required shape and are at risk of aberrant folding and aggregation. As the abstract to this paper states: “To avoid these dangers, cells invest in a complex network of molecular chaperones, which are ingenious mechanisms to prevent aggregation and promote efficient folding.”
In real life, the protein folds into its final shape in a fraction of a second! The Blue Gene computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. This is the complexity found for folding JUST ONE relatively short 'simple' existing protein molecule. Yet evolution must account for the origination, and organization, of far, far more than just one relatively short, specifically sequenced protein molecule:
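The '33 million times faster' figure is easy to sanity-check: a year of computing contains about 3.15 x 10^7 seconds, so if the real protein folds in roughly one second (my assumption to make the quoted ratio work; many proteins fold far faster still), the required speed-up is:

$$\frac{3.15 \times 10^{7}\ \text{s (one year)}}{\sim 1\ \text{s (actual folding time)}} \approx 3 \times 10^{7}, \ \text{i.e. on the order of 33 million}$$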
A New Guide to Exploring the Protein Universe
"It is estimated, based on the total number of known life forms on Earth, that there are some 50 billion different types of proteins in existence today, and it is possible that the protein universe could hold many trillions more."
Lynn Yarris - 2005
Shoot, no one really has a firm clue as to exactly how many different proteins reside in a single cell, much less in all of life:
Go to the Cell, Thou Sluggard - March 2011
Excerpt: Calculations indicate that each human cell contains roughly a billion protein molecules.,,, These proteins have a kind of address label, a signal sequence, that specifies what place inside or outside the cell they need to be transported to. This transport must function flawlessly if order is to be maintained in the cell,
Even the most generous of protein classifications, 'folds and superfamilies', yields several thousand completely unique proteins:
SCOP (Structural Classification of Proteins) site - gpuccio
Excerpt: However we group the proteome, we have at present at least 1000 different fundamental folds, 2000 “a little less fundamentally different” folds (the superfamilies), and 6000 totally unrelated groups of primary sequences.
What makes matters much worse for the materialist is that he will try to assert that existing functional proteins of one structure can easily mutate into other functional proteins, of a completely different structure or function, by pure chance. Yet once again the empirical evidence betrays the materialist. The proteins that are found in life are shown to be highly constrained in their ability to evolve into other proteins:
Following the Evidence Where It Leads: Observations on Dembski's Exchange with Shapiro - Ann Gauger - January 2012
Dollo’s law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009
Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,,
Severe Limits to Darwinian Evolution: - Michael Behe - Oct. 2009
Wheel of Fortune: New Work by Thornton's Group Supports Time-Asymmetric Dollo's Law - Michael Behe - October 5, 2011
Stability effects of mutations and protein evolvability. October 2009
The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway - Ann K. Gauger and Douglas D. Axe - April 2011
Excerpt: We infer from the mutants examined that successful functional conversion would in this case require seven or more nucleotide substitutions. But evolutionary innovations requiring that many changes would be extraordinarily rare, becoming probable only on timescales much longer than the age of life on earth.
Corticosteroid Receptors in Vertebrates: Luck or Design? - Ann Gauger - October 11, 2011
Excerpt: if merely changing binding preferences is hard, even when you start with the right ancestral form, then converting an enzyme to a new function is completely beyond the reach of unguided evolution, no matter where you start.
“Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed - along with the organism carrying it.” Maxim D. Frank-Kamenetskii, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)
"A problem with the evolution of proteins having new shapes is that proteins are highly constrained, and producing a functional protein from a functional protein having a significantly different shape would typically require many mutations of the gene producing the protein. All the proteins produced during this transition would not be functional, that is, they would not be beneficial to the organism, or possibly they would still have their original function but not confer any advantage to the organism. It turns out that this scenario has severe mathematical problems that call the theory of evolution into question. Unless these problems can be overcome, the theory of evolution is in trouble."
Problems in Protein Evolution:
Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors - Doug Axe
Darwin's God: Post Synaptic Proteins Intolerant of Change - December 2010
As well, the 'errors/mutations' that are found to 'naturally' occur in protein sequences are found to be 'designed errors':
Cells Defend Themselves from Viruses, Bacteria With Armor of Protein Errors - Nov. 2009
There are even 'protein police':
GATA-1: A Protein That Regulates Proteins - Feb. 2010
Heat shock proteins:
Excerpt: They play an important role in protein-protein interactions such as folding and assisting in the establishment of proper protein conformation (shape) and prevention of unwanted protein aggregation.
This following paper, and audio interview, shows that there is a severe 'fitness cost' for cells to carry 'transitional' proteins that have not achieved full functionality yet:
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness - May 2010
Excerpt: Despite the theoretical existence of this short adaptive path to high fitness, multiple independent lines grown in tryptophan-limiting liquid culture failed to take it. Instead, cells consistently acquired mutations that reduced expression of the double-mutant trpA gene. Our results show that competition between reductive and constructive paths may significantly decrease the likelihood that a particular constructive path will be taken.
Testing Evolution in the Lab With Biologic Institute's Ann Gauger - audio
In fact the Ribosome, which makes the myriad of different, yet specific, types of proteins found in life, is found to be severely intolerant to any random mutations occurring to proteins.
The Ribosome: Perfectionist Protein-maker Trashes Errors
Excerpt: The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist...the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products... To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
And exactly how is the evolution of new life forms supposed to 'randomly' occur if it is prevented from 'randomly' occurring to the proteins in the first place?
As well, the 'protein factory' of the ribosome, which is the only known machine in the universe capable of making proteins of any significant length, is far more complicated than first thought:
Honors to Researchers Who Probed Atomic Structure of Ribosomes - Robert F. Service
Excerpt: "The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.”
Moreover, scientists are finding many protein complexes are extremely intolerant to any random mutations:
Warning: Do NOT Mutate This Protein Complex: - June 2009
Excerpt: In each cell of your body there is a complex of 8 or more proteins bound together called the BBSome. This protein complex, discovered in 2007, should not be disturbed. Here’s what happens when it mutates: “A homozygous mutation in any BBSome subunit (except BBIP10) will make you blind, obese and deaf, will obliterate your sense of smell, will make you grow extra digits and toes and cause your kidneys to fail.”... the BBSome is “highly conserved” (i.e., unevolved) in all ciliated organisms from single-celled green algae to humans,..."
Which begs the question: if this complex of 8 proteins, which is found throughout life, is severely intolerant to any mutations happening to it now, how in the world did it come to be in the first photosynthetic life in the first place?
Even if evolution somehow managed to overcome these impossible hurdles for generating novel proteins by totally natural means, evolution would still face the monumental hurdle of generating complementary protein-protein binding sites, by which the novel proteins would actually interact with each other in order to accomplish the specific tasks needed in a cell (it is estimated that there are at least 10,000 different types of protein-protein binding sites in a 'simple' cell; Behe: Edge Of Evolution).
What does the recent hard evidence say about novel protein-protein binding site generation?
Protein-Protein Interactions (PPI) Fine-Tune the Case for Intelligent Design - Article with video - April 2011
Excerpt: The most recent work by the Harvard scientists indicates that the concentration of PPI-participating proteins in the cell is also carefully designed.
"The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by accident) in the history of life. It is biologically unreasonable."
Michael J. Behe PhD. (from page 146 of his book "Edge of Evolution")
The Sheer Lack Of Evidence For Macro Evolution - William Lane Craig - video
Nature Paper,, Finds Darwinian Processes Lacking - Michael Behe - Oct. 2009
Excerpt: Now, thanks to the work of Bridgham et al (2009), even such apparently minor switches in structure and function (of a protein to its supposed ancestral form) are shown to be quite problematic. It seems Darwinian processes can’t manage to do even as much as I had thought. (which was 1 in 10^40 for just 2 binding sites)
So, how many protein-protein binding sites are found in life?
Dr. Behe, on the important Table 7.1 on page 143 of Edge Of Evolution, finds that a typical cell might have some 10,000 protein-binding sites. Whereas a conservative estimate for protein-protein binding sites in a multicellular creature is,,,
Largest-Ever Map of Plant Protein Interactions - July 2011
So taking into account that they only covered 2% of the full protein-protein "interactome", that gives us a number, for different protein-protein interactions, of 310,000. Thus, from my very rough 'back of the envelope' calculations, we find that this is at least 30 times higher than Dr. Behe's estimate of 10,000 different protein-protein binding sites for a typical single cell (Page 143; Edge of Evolution; Behe). Therefore, at least at first glance from my rough calculations, it certainly seems to be a gargantuan step that evolution must somehow make, by purely unguided processes, to go from a single cell to a multi-cellular creature. To illustrate just how difficult a step it is, the order of difficulty of developing a single protein-protein binding site is put at 10^20 replications of the malarial parasite by Dr. Behe. This number comes from direct empirical observation.
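For transparency, here is the arithmetic behind my rough calculation (assuming the study mapped on the order of 6,200 interactions, which is what 2% coverage of the stated total implies):

$$0.02 \times N \approx 6{,}200 \;\Rightarrow\; N \approx 6{,}200 \times 50 = 310{,}000, \qquad \frac{310{,}000}{10{,}000} = 31 \approx 30$$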
Dr. Behe's empirical research agrees with what is found if scientists try to purposely design a protein-protein binding site:
Viral-Binding Protein Design Makes the Case for Intelligent Design Sick! (as in cool) - Fazale Rana - June 2011
Moreover, there is, 'surprisingly', found to be 'rather low' conservation of Domain-Domain Interactions occurring in Protein-Protein interactions:
A Top-Down Approach to Infer and Compare Domain-Domain Interactions across Eight Model Organisms
Excerpt: Knowledge of specific domain-domain interactions (DDIs) is essential to understand the functional significance of protein interaction networks. Despite the availability of an enormous amount of data on protein-protein interactions (PPIs), very little is known about specific DDIs occurring in them.,,, Our results show that only 23% of these DDIs are conserved in at least two species and only 3.8% in at least 4 species, indicating a rather low conservation across species.,,,
As well, RNA, which codes for the proteins at the ribosome, is found to be intolerant to 'random mutations':
Molecular Typesetting: How Errors Are Corrected (In RNA) While Proteins Are Being Built
Excerpt: Ensuring that proteins are built correctly is essential to the proper functioning of our bodies,,,“Scientists have been puzzled as to how this process makes so few mistakes.,,,“In fact, there is more than one identified mechanism for ensuring that genetic code is copied correctly."
The cell has elaborate ways to safeguard its genetic library by repairing DNA, but now scientists are finding the same enzymes can also repair RNA. RNA methylation damage can be repaired by the same AlkB enzyme that repairs DNA. This is surprising because RNA and proteins were considered more expendable than DNA. (Creation-Evolution Headlines - Feb. 2003)
RNA: Protein Regulators Are Themselves Regulated
“What was formerly conceived of as a direct, straightforward pathway is gradually turning out to be a dense network of regulatory mechanisms: genes are not simply translated into proteins via mRNA (messenger RNA). MicroRNAs control the translation of mRNAs (messenger RNAs) into proteins, and proteins in turn regulate the microRNAs at various levels.”
Researchers Uncover New Kink In Gene Control: - Oct. 2009
Excerpt: a collaborative effort,, has uncovered more than 300 proteins that appear to control genes, a newly discovered function for all of these proteins previously known to play other roles in cells.,,,The team suspects that many more proteins encoded by the human genome might also be moonlighting to control genes,,,
On top of these monumental problems, for just finding any one specific functional protein, or for just finding any protein/protein binding sites, or for accounting for multiple layers of error correction that prevent evolution from happening to proteins in the first place, a materialist must still account for how the DNA code came about in any origin of life scenario he puts forth. These following videos and articles highlight the 'DNA problem':
Programming of Life - DNA - video
A New Design Argument - Charles Thaxton
Excerpt: "There is an identity of structure between DNA (and protein) and written linguistic messages. Since we know by experience that intelligence produces written messages, and no other cause is known, the implication, according to the abductive method, is that intelligent cause produced DNA and protein. The significance of this result lies in the security of it, for it is much stronger than if the structures were merely similar. We are not dealing with anything like a superficial resemblance between DNA and a written text. We are not saying DNA is like a message. Rather, DNA is a message. True design thus returns to biology."
Information Theory, Evolution, and the Origin of Life - Hubert P. Yockey, 2005
The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video
Codes and Axioms are always the result of mental intention, not material processes
A.E. Wilder Smith, DNA, Cactus, and Von Neumann Machines - John MacArthur - audio
Information - The Utter Demise Of Darwinian Evolution - video
"A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107."
(The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)
The Digital Code of DNA - 2003 - Leroy Hood & David Galas
Excerpt: The discovery of the structure of DNA transformed biology profoundly, catalysing the sequencing of the human genome and engendering a new view of biology as an information science.
The Digital Code of DNA and the Unimagined Complexity of a ‘Simple’ Bacteria – Rabbi Moshe Averick – video (Notes in Description)
Upright Biped Replies to Dr. Moran on “Information” - December 2011
Excerpt: 'a fair reading suggests that the information transfer in the genome shouldn’t be expected to adhere to the qualities of other forms of information transfer. But as it turns out, it faithfully follows the same physical dynamics as any other form of recorded information.'
Even the leading "New Atheist" in the world, Richard Dawkins, agrees that DNA functions exactly like digital code:
Richard Dawkins Opens Mouth; Inserts Foot - video
i.e. DNA functions exactly as a 'devised code':
Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, -The Cell's Design - 2008 - page 177)
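Rana's 'roughly 10^55 codes per second' follows directly from Yockey's two numbers:

$$\frac{1.40 \times 10^{70}\ \text{codes}}{6.3 \times 10^{15}\ \text{s}} \approx 2.2 \times 10^{54}\ \text{codes per second} \approx 10^{55}$$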
Ode to the Code - Brian Hayes
Evolutionists have long argued that the genetic code is universal for all lifeforms, and maintain that this fact is strong evidence for evolution from a universal common ancestor, yet it appears they were wrong once again:
No Darwin Tree of Life (Craig Venter vs. Richard Dawkins)- video
Venter vs. Dawkins on the Tree of Life - and Another Dawkins Whopper - March 2011
Excerpt:,,, But first, let's look at the reason Dawkins gives for why the code must be universal:
"The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation...this would spell disaster." (2009, p. 409-10)
OK. Keep Dawkins' claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code).
Simple counting question: does "one or two" equal 23? That's the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two.
As well, there was an 'optimality' found for the 20 amino acid set used in the 'standard' genetic code when the set was compared to 1 million randomly generated alternative amino acid sets:
Does Life Use a Non-Random Set of Amino Acids? - Jonathan M. - April 2011
Excerpt: The authors compared the coverage of the standard alphabet of 20 amino acids for size, charge, and hydrophobicity with equivalent values calculated for a sample of 1 million alternative sets (each also comprising 20 members) drawn randomly from the pool of 50 plausible prebiotic candidates. The results? The authors noted that: "…the standard alphabet exhibits better coverage (i.e., greater breadth and greater evenness) than any random set for each of size, charge, and hydrophobicity, and for all combinations thereof."
Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement – 2009
Excerpt: A molecular dynamics calculation of the amino acid polar requirement is used to score the canonical genetic code. Monte Carlo simulation shows that this computational polar requirement has been optimized by the canonical genetic code, an order of magnitude more than any previously known measure, effectively ruling out a vertical evolution dynamics.
The Finely Tuned Genetic Code - Jonathan M. - November 2011
Excerpt: Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: "why is the genetic code the way it is and how did it come to be?," that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology. - Eugene Koonin and Artem Novozhilov
Moreover the first DNA code of life on earth had to be at least as complex as the current DNA code found in life:
Shannon Information - Channel Capacity - Perry Marshall - video
“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
Donald E. Johnson – Bioinformatics: The Information in Life
Deciphering Design in the Genetic Code - Fazale Rana
Excerpt: When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code's capacity occurred outside the distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This finding means that of the 10^18 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally in nature.
“The genetic code’s error-minimization properties are far more dramatic than these (one in a million) results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means of 10^18 codes few, if any have an error-minimization capacity that approaches the code found universally throughout nature.”
Fazale Rana - From page 175; 'The Cell’s Design'
Here is a comment on a study of a 'putative primitive' amino acid set;
DNA - The Genetic Code - Optimal Error Minimization & Parallel Codes - Dr. Fazale Rana - video
Excerpt: It appears, then, that the genetic code has been put together in view of minimizing not just the occurrence of amino acid substitution mutations, but also the detrimental effects that would result when amino acid substitution mutations do occur.
Though the DNA code is found to be optimal from an error-minimization standpoint, it is also now found that the fidelity of the genetic code, that is, of how a specific amino acid is 'spelled', is far greater than had at first been thought:
Synonymous Codons: Another Gene Expression Regulation Mechanism - September 2010
Excerpt: There are 64 possible triplet codons in the DNA code, but only 20 amino acids they produce. As one can see, some amino acids can be coded by up to six “synonyms” of triplet codons: e.g., the codes AGA, AGG, CGA, CGC, CGG, and CGU will all yield arginine when translated by the ribosome. If the same amino acid results, what difference could the synonymous codons make? The researchers found that alternate spellings might affect the timing of translation in the ribosome tunnel, and slight delays could influence how the polypeptide begins its folding. This, in turn, might affect what chemical tags get put onto the polypeptide in the post-translational process. In the case of actin, the protein that forms transport highways for muscle and other things, the researchers found that synonymous codons produced very different functional roles for the “isoform” proteins that resulted in non-muscle cells,,, In their conclusion, they repeated, “Whatever the exact mechanism, the discovery of Zhang et al. that synonymous codon changes can so profoundly change the role of a protein adds a new level of complexity to how we interpret the genetic code.”,,,
Werner Gitt - In The Beginning Was Information - p. 95
Collective evolution and the genetic code - 2006:
Excerpt: The genetic code could well be optimized to a greater extent than anything else in biology and yet is generally regarded as the biological element least capable of evolving.
Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes.... the present findings support the view that protein-coding regions can carry abundant parallel codes.
The data compression of some stretches of human DNA is estimated to be up to 12 codes thick (12 different ways of DNA transcription) (Trifonov, 1989). (This is well beyond the complexity of any computer code ever written by man). John Sanford - Genetic Entropy
The multiple codes of nucleotide sequences. Trifonov EN. - 1989
Excerpt: Nucleotide sequences carry genetic information of many different kinds, not just instructions for protein synthesis (triplet code).
"In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10].
Donald E. Johnson – Programming of Life – pg.51 - 2010
DNA Caught Rock 'N Rollin': On Rare Occasions DNA Dances Itself Into a Different Shape - January 2011
Ends and Means: More on Meyer and Nelson in BIO-Complexity - September 2011
Excerpt: According to Garrett and Grisham's Biochemistry, the aminoacyl tRNA synthetase is a "second genetic code" because it must discriminate among each of the twenty amino acids and then call out the proper tRNA for that amino acid: "Although the primary genetic code is key to understanding the central dogma of molecular biology on how DNA encodes proteins, the second genetic code is just as crucial to the fidelity of information transfer."
Histone Inspectors: Codes and More Codes - Cornelius Hunter - March 2010
Excerpt: By now most people know about the DNA code. A DNA strand consists of a sequence of molecules, or letters, that encodes for proteins. Many people do not realize, however, that there are additional, more nuanced, codes associated with the DNA.
Four More DNA Bases? - August 2011
Excerpt: As technology allows us to delve ever deeper into the inner workings of the cell, we continue to find layer-upon-layer of complexity. DNA, in particular, is an incredibly complex information-bearing molecule that bears the hallmarks of design.
Besides multiple layers of 'classical information' embedded in overlapping layers throughout the DNA, there has now been discovered another layer of 'quantum information' embedded throughout the DNA:
Quantum Information In DNA & Protein Folding - short video
Human DNA is like a computer program but far, far more advanced than any software we've ever created.
Bill Gates, The Road Ahead, 1996, p. 188
The Coding Found In DNA Surpasses Man's Ability To Code - Stephen Meyer - video
Stephen Meyer - Excerpted Clip of CBN interview on problems of Craig Venter's Synthetic Life - DNA - Complexity Of The Cell - Layered Information - video
Genetic Entropy - Dr. John Sanford - Evolution vs. Reality (Super Programming in the Genome that 'dwarfs' our computer programs) - video
DNA - Evolution Vs. Polyfunctionality - video
DNA - Poly-Functional Complexity equals Poly-Constrained Complexity
Do you believe Richard Dawkins exists?
Excerpt: DNA is the best information storage mechanism known to man. A single pinhead of DNA contains as much information as could be stored on 2 million two-terabyte hard drives.
Bill Gates, recognizing the superiority of genetic coding over the best computer coding we now have, has funded research into this area:
Welcome to CoSBi - (Computational and Systems Biology)
Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.
How DNA Compares To Human Language - Perry Marshall - video
Yet the DNA code is not even reducible to the laws of physics or chemistry:
The Origin of Life and The Suppression of Truth
Excerpt: 'Many claims have been made that nucleotides of DNA have been produced in such “spark and soup” experiments. However, after a careful review of the scientific literature, evolutionist Robert Shapiro stated that the nucleotides of DNA and RNA, "….have never been reported in any amount in such sources, yet a mythology has emerged that maintains the opposite….I have seen several statements in scientific sources which claim that proteins and nucleic acids themselves have been prepared… These errors reflect the operation of an entire belief system… The facts do not support his belief…Such thoughts may be comforting, but they run far ahead of any experimental validation."
Life’s Irreducible Structure
Excerpt: “Mechanisms, whether man-made or morphological, are boundary conditions harnessing the laws of inanimate nature, being themselves irreducible to those laws. The pattern of organic bases in DNA which functions as a genetic code is a boundary condition irreducible to physics and chemistry." Michael Polanyi - Hungarian polymath - 1968 - Science (Vol. 160. no. 3834, pp. 1308 – 1312)
“an attempt to explain the formation of the genetic code from the chemical components of DNA… is comparable to the assumption that the text of a book originates from the paper molecules on which the sentences appear, and not from any external source of information.”
Dr. Wilder-Smith
The Capabilities of Chaos and Complexity - David L. Abel - 2009
Excerpt: "A monstrous ravine runs through presumed objective reality. It is the great divide between physicality and formalism. On the one side of this Grand Canyon lies everything that can be explained by the chance and necessity of physicodynamics. On the other side lies those phenomena than can only be explained by formal choice contingency and decision theory—the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used. Physical dynamics includes spontaneous non linear phenomena, but not our formal applied-science called “non linear dynamics”(i.e. language,information).
i.e. There are no physical or chemical forces between the nucleotides along the linear axis of DNA (where the information is) that cause the sequence of nucleotides to exist as it does. In fact, as far as the foundational laws of the universe are concerned, the DNA molecule doesn't even have to exist at all.
Judge Rules DNA is Unique (and not patentable) Because it Carries Functional Information - March 2010
“Today the idea that DNA carries genetic information in its long chain of nucleotides is so fundamental to biological thought that it is sometimes difficult to realize the enormous intellectual gap that it filled.... DNA is relatively inert chemically.”
Stephen Meyer is interviewed about the "information problem" in DNA, Signature in the Cell - video
The DNA Enigma - The Ultimate Chicken and Egg Problem - Chris Ashcraft - video
The DNA Enigma - Where Did The Information Come From? - Stephen C. Meyer - video
Believing Life's 'Signature in the Cell' an Interview with Stephen Meyer - CBN video
Every Bit Digital DNA’s Programming Really Bugs Some ID Critics - March 2010
Excerpt: In 2003 renowned biologist Leroy Hood and biotech guru David Galas authored a review article in the world’s leading scientific journal, Nature, titled, “The digital code of DNA.”,,, MIT Professor of Mechanical Engineering Seth Lloyd (no friend of ID) likewise eloquently explains why DNA has a “digital” nature: "It’s been known since the structure of DNA was elucidated that DNA is very digital. There are four possible base pairs per site, two bits per site, three and a half billion sites, seven billion bits of information in the human DNA. There’s a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins."
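Lloyd's 'two bits per site' figure is just elementary information theory; here is a minimal sketch of the arithmetic (the 3.5 billion site count is taken from the quote above):

import math

bits_per_site = math.log2(4)   # four possible base pairs per site -> 2 bits of information
sites = 3.5e9                  # ~3.5 billion sites, per the Lloyd quote
print(bits_per_site)           # 2.0
print(bits_per_site * sites)   # 7.0e9, i.e. the 'seven billion bits' Lloyd cites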
Stephen C. Meyer - Signature In The Cell:
"DNA functions like a software program," "We know from experience that software comes from programmers. Information--whether inscribed in hieroglyphics, written in a book or encoded in a radio signal--always arises from an intelligent source. So the discovery of digital code in DNA provides evidence that the information in DNA also had an intelligent source."
Extreme Software Design In Cells - Stephen Meyer - video
DNA - The Genetic Code - Optimization, Error Minimization & Parallel Codes - Fazale Rana - video
As well as coding optimization, DNA is also optimized to prevent damage from light:
DNA Optimized for Photostability
Excerpt: These nucleobases maximally absorb UV-radiation at the same wavelengths that are most effectively shielded by ozone. Moreover, the chemical structures of the nucleobases of DNA allow the UV-radiation to be efficiently radiated away after it has been absorbed, restricting the opportunity for damage.
The materialist must also account for the overriding complex architectural organization of DNA:
DNA Wrapping (Histone Protein Wrapping to Cell Division)- video
DNA - Replication, Wrapping & Mitosis - video
Dr. Jerry Bergman, "Divine Engineering: Unraveling DNA's Design":
The DNA packing process is both complex and elegant and is so efficient that it achieves a reduction in length of DNA by a factor of 1 million.
DNA Packaging: Nucleosomes and Chromatin
each of us has enough DNA to go from here to the Sun and back more than 300 times, or around Earth's equator 2.5 million times! How is this possible?
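A rough back-of-the-envelope check of that claim, sketched under commonly cited assumptions (about 2 metres of DNA per cell and on the order of 5 x 10^13 cells per human body; both figures are illustrative assumptions, not taken from the source above):

dna_per_cell_m = 2.0     # metres of DNA per cell (assumption; a commonly cited figure)
cells_per_body = 5e13    # ~50 trillion cells per body (rough assumption)
au_m = 1.496e11          # mean Earth-Sun distance in metres (1 AU)
total_dna_m = dna_per_cell_m * cells_per_body   # ~1e14 m of DNA per person
print(total_dna_m / (2 * au_m))                 # ~334 round trips, consistent with 'more than 300 times'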
It turns out that DNA is also optimized for 'maximally dense packing':
Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome - Oct. 2009
3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009
Scientists' 3-D View of Genes-at-Work Is Paradigm Shift in Genetics - Dec. 2009
Excerpt: Highly coordinated chromosomal choreography leads genes and the sequences controlling them, which are often positioned huge distances apart on chromosomes, to these 'hot spots'. Once close together within the same transcription factory, genes get switched on (a process called transcription) at an appropriate level at the right time in a specific cell type. This is the first demonstration that genes encoding proteins with related physiological role visit the same factory.
Although evolution depends on 'mutations/errors' to DNA to supply variation, there are multiple layers of error correction in the cell that protect against any 'random changes' to DNA happening in the first place:
The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective - February 2011
Excerpt: "Unbounded random change of nucleotide codes through the accumulation of irreparable, advantageous, code-expanding, inheritable mutations at the level of individual nucleotides, as proposed by evolutionary theory, requires the mutation protection at the level of the individual nucleotides and at the higher levels of the code to be switched off or at least to dysfunction. Dysfunctioning mutation protection, however, is the origin of cancer and hereditary diseases, which reduce the capacity to live and to reproduce. Our mutation protection perspective of the evolutionary dynamics of digital and nucleotide codes thus reveals the presence of a paradox in evolutionary theory between the necessity and the disadvantage of dysfunctioning mutation protection. This mutation protection paradox, which is closely related with the paradox between evolvability and mutational robustness, needs further investigation."
Contradiction in evolutionary theory - video - (The contradiction between extensive DNA repair mechanisms and the necessity of 'random mutations/errors' for Darwinian evolution)
The Darwinism contradiction of repair systems
Excerpt: The bottom line is that repair mechanisms are incompatible with Darwinism in principle. Since sophisticated repair mechanisms do exist in the cell after all, then the thing to discard in the dilemma to avoid the contradiction necessarily is the Darwinist dogma.
Repair mechanisms in DNA include:
A proofreading system that catches almost all errors
A mismatch repair system to back up the proofreading system
Photoreactivation (light repair)
Removal of methyl or ethyl groups by O6-methylguanine methyltransferase
Base excision repair
Nucleotide excision repair
Double-strand DNA break repair
Recombination repair
Error-prone bypass
Scientists Decipher Missing Piece Of First-responder DNA Repair Machine - Oct. 2009
Quantum Dots Spotlight DNA-Repair Proteins in Motion - March 2010
Excerpt: "How this system works is an important unanswered question in this field," he said. "It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It's akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour." Dr. Bennett Van Houten - of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time.,, These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
Of note: DNA repair machines 'fixing every pothole in America before the next rush hour' is analogous to the traveling salesman problem. The traveling salesman problem is an NP-hard (read: very hard) problem in computer science; the problem involves finding the shortest possible route between cities, visiting each city only once. 'Traveling salesman problems' are notorious for keeping supercomputers busy for days.
NP-hard problem
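To make the scale of such a problem concrete, here is a minimal brute-force travelling-salesman sketch (the city coordinates are made up purely for illustration). Fixing the starting city, an exact search must consider (n-1)! orderings, which is what keeps supercomputers busy for days on large instances:

from itertools import permutations
from math import dist, factorial

# Hypothetical city coordinates, for illustration only
cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2), (4, 5)]

def tour_length(order):
    # total length of a closed tour visiting the cities in the given order
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

n = len(cities)
# fix city 0 as the start and try every ordering of the rest: (n-1)! routes
best = min(permutations(range(1, n)), key=lambda rest: tour_length((0,) + rest))
print("best tour:", (0,) + best, "length:", round(tour_length((0,) + best), 2))
print("routes checked:", factorial(n - 1))   # 120 here; roughly 9e155 for 100 cities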
Since it is obvious that there is not a material CPU (central processing unit) in the DNA, or cell, busily computing answers to this monster logistic problem in a purely 'material' fashion by crunching bits, it is readily apparent that this monster 'traveling salesman problem' for DNA repair is somehow being computed by 'non-local' quantum computation within the cell and/or within DNA;
Of related interest:
Electric (Quantum) DNA repair - video!
DNA Computer
Excerpt: DNA computers will work through the use of DNA-based logic gates. These logic gates are very much similar to what is used in our computers today, with the only difference being the composition of the input and output signals.,,, With the use of DNA logic gates, a DNA computer the size of a teardrop will be more powerful than today's most powerful supercomputer. A DNA chip less than the size of a dime will have the capacity to perform 10 trillion parallel calculations at one time as well as hold ten terabytes of data. The capacity to perform parallel calculations, much less trillions of parallel calculations, is something silicon-based computers are not able to do. As such, a complex mathematical problem that could take silicon-based computers thousands of years to solve can be done by DNA computers in hours.
further notes:
Researchers discover how key enzyme repairs sun-damaged DNA - July 2010
Excerpt: Ohio State University physicist and chemist Dongping Zhong and his colleagues describe how they were able to observe the enzyme, called photolyase, inject a single electron and proton into an injured strand of DNA. The two subatomic particles healed the damage in a few billionths of a second. "It sounds simple, but those two atomic particles actually initiated a very complex series of chemical reactions," said Zhong,,, "It all happened very fast, and the timing had to be just right."
DNA 'molecular scissors' discovered - July 2010
Excerpt: ' We discovered a new protein, FAN1, which is essential for the repair of DNA breaks and other types of DNA damage.
More DNA Repair Wonders Found - October 2010
Excerpt: This specialized enzyme may attract other repair enzymes to the site, and “speeds up the process by about 100 times.” The enzyme “uses several rod-like helical structures... to grab hold of DNA.”,,, On another DNA-repair front, today’s Nature described a “protein giant” named BRCA2 that is critically involved in DNA repair, specifically targeting the dangerous double-stranded breaks that can lead to serious health consequences
‘How good would each typist have to be, in order to match the DNA’s performance? The answer is almost too ludicrous to express. For what it is worth, every typist would have to have an error rate of about one in a trillion; that is, he would have to be accurate enough to make only a single error in typing the Bible 250,000 times at a stretch. A good secretary in real life has an error rate of about one per page. This is about a billion times the error rate of the histone H4 gene. A line of real life secretaries (without a correcting reference) would degrade the text to 99 percent of its original by the 20th member of the line of 20 billion. By the 10,000th member of the line less than 1 percent would survive. The point near total degradation would be reached before 99.9995% of the typists had even seen it.’
Richard Dawkins - The blind watchmaker - Page 123-124
Moreover, the protein machinery that replicates DNA is found to be vastly different in even the most ancient of different single celled organisms:
Did DNA replication evolve twice independently? - Koonin
Excerpt: However, several core components of the bacterial (DNA) replication machinery are unrelated or only distantly related to the functionally equivalent components of the archaeal/eukaryotic (DNA) replication apparatus.
There simply is no smooth 'gradual transition' to be found between these most ancient of life forms, bacteria and archaea, as the following articles and videos clearly point out:
Was our oldest ancestor a proton-powered rock?
Excerpt: In particular, the detailed mechanics of DNA replication would have been quite different. It looks as if DNA replication evolved independently in bacteria and archaea,... Even more baffling, says Martin, neither the cell membranes nor the cell walls have any details in common (between the bacteria and the archaea).
Problems of the RNA World - Did DNA Evolve Twice? - Dr. Fazale Rana - video
An enormous gap exists between prokaryote cells (bacteria and cyanobacteria) and eukaryote cells (protists, plants and animals). A crucial difference between prokaryotes and eukaryotes is the means they use to produce ATP (energy).
Mitochondria - Molecular Machine - Powerhouse Of The Cell - video
On The Non-Evidence For The Endosymbiotic Origin Of The Mitochondria - March 2011
Bacteria Too Complex To Be Primitive Eukaryote Ancestors - July 2010
Excerpt: “Bacteria have long been considered simple relatives of eukaryotes,” wrote Alan Wolfe for his colleagues at Loyola. “Obviously, this misperception must be modified.... There is a whole process going on that we have been blind to.”,,, For one thing, Forterre and Gribaldo revealed serious shortcomings with the popular “endosymbiosis” model – the idea that a prokaryote engulfed an archaea and gave rise to a symbiotic relationship that produced a eukaryote.
On the Origin of Mitochondria: Reasons for Skepticism on the Endosymbiotic Story -
Jonathan M. - January 10, 2012
Materialism simply has no credible answer for how this extreme level of complexity 'accidentally' arose in the first living cell, nor for how this extreme integrated complexity found in life randomly evolved to the next 'simple' step of life. To imagine/believe it can happen by accident, with no compelling evidence to support the position, is not empirical science. In fact, believing in something without any reasonable evidence whatsoever is usually called blind faith.
Even more problematic for evolutionists is that even within the 'bacterial world' there are enormous unexplained gaps of completely unique genes within each different species of bacteria:
ORFan Genes Challenge Common Descent – Paul Nelson – video with references
Because of these insurmountable problems for generating novel functional proteins, or meaningful DNA, or any meaningfully functional information whatsoever, materialists are trying very hard to sell the 'RNA World' to the general public. Yet we have absolutely no compelling reason to believe that a hypothetical 'RNA World' would ever start magically generating the massive amounts of complex functional information required for the first life. Here is a sampling of the many critiques of the RNA world hypothesis:
Three subsets of sequence complexity and their relevance to biopolymeric information - David L Abel and Jack T Trevors:
Excerpt: Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction... No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization... It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute.
Origin of Life: Claiming Something for Almost Nothing (RNA)
Excerpt: Yarus admitted, “the tiny replicator has not been found, and that its existence will be decided by experiments not yet done, perhaps not yet imagined.” But does this (laboratory) work support a naturalistic origin of life? A key question is whether a (self-replicating) molecule could form under plausible prebiotic conditions. Here’s how the paper described their work in the lab to get this (precursor) molecule:,,(several 'unnatural' complex steps are listed)
Stephen Meyer points out that intelligent design was clearly required for even this meager result:
Stephen C. Meyer and Paul A. Nelson
Excerpt: Although Yarus et al. claim that the DRT model undermines an intelligent design explanation for the origin of the genetic code, the model’s many shortcomings in fact illustrate the insufficiency of undirected chemistry to construct the semantic system represented by the code we see today.
Ann Gauger offers an easy-to-understand outline of Meyer and Nelson's preceding paper:
In BIO-Complexity, Meyer and Nelson Debunk DRT - Ann Gauger
Excerpt: While DNA carries information necessary to build cells, it performs no chemistry and builds no cellular structures by itself. Rather, the information in DNA must be translated into proteins, which then can carry out the various chemical and structural functions of life. But there is no direct way to convert a given DNA sequence into a protein sequence -- no direct chemical association between DNA nucleotides and amino acids. Some sort of decoding mechanism is needed to translate the information encoded in DNA into protein.
That decoding mechanism involves a whole host of enzymes, RNAs and regulatory molecules, all functioning as an elegant, efficient, accurate and complicated system for copying and translating the information in DNA into a usable form.,,, The problem is, this decoding system is self-referential and causally circular. Explaining its origin becomes a chicken and egg problem. As it stands now, you need the machinery that translates DNA into protein in order to make the very same machinery that translates DNA into protein.,,, There is no natural affinity between RNAs, amino acids, and codes. And the origin of life remains inexplicable in materialistic terms.
Materialists have not even created all 4 'letters' of RNA by natural means:
Response to Darrel Falk’s Review of Signature in the Cell - Stephen Meyer - Jan. 2010
Excerpt: Sutherland’s work only partially addresses the first and least severe of these difficulties: the problem of generating the constituent building blocks or monomers in plausible pre-biotic conditions. It does not address the more severe problem of explaining how the bases in nucleic acids (either DNA or RNA) acquired their specific information-rich arrangements.
Stirring the Soup - May 2009
"essentially, the scientists have succeeded in creating a couple of letters of the biological alphabet (in a "thermodynamically uphill" environment). What they need to do now is create the remaining letters, and then show how these letters were able to attach themselves together to form long chains of RNA, and arrange themselves in a specific order to encode information for creating specific proteins, and instructions to assemble the proteins into cells, tissues, organs, systems, and finally, complete phenotypes."
Uncommon Descent - C Bass:
Scientists Say Intelligent Designer Needed for Origin of Life Chemistry
Excerpt: Organic chemist Dr. Charles Garner recently noted in private correspondence that "while this work helps one imagine how RNA might form, it does nothing to address the information content of RNA. So, yes, there was a lot of guidance by an intelligent chemist." Sutherland's research produced only 2 of the 4 RNA nucleobases, and Dr. Garner also explained why, as is often the case, "the basic chemistry itself also required the hand of an intelligent chemist."
Meyer Responds to Stephen Fletcher - Stephen Meyer - March 2010
Excerpt: Nevertheless, this work does nothing to address the much more acute problem of explaining how the nucleotide bases in DNA or RNA acquired their specific information-rich arrangements, which is the central topic of my book (Signature In The Cell). In effect, the Powner (Sutherland) study helps explain the origin of the “letters” in the genetic text, but not their specific arrangement into functional “words” or “sentences.”
Deflating the synthetic proofs of the RNA World - David Tyler - August 2011
Excerpt: There may be a consensus about the RNA World, but it is not a consensus based on evidence. The approach is supported by synthetic proofs drawn from unrealistic laboratory experiments, showing all the signs of a dogmatism that pastes its ideas on to nature.
Here are some more critiques of the 'RNA World' scenario:
Did Life Begin in an RNA World?
Self Replication and Perpetual Motion - The Second Law's Take On The RNA World
Chemistry by Chance: A Formula for Non-Life by Charles McCombs, Ph.D.
Excerpt: The following eight obstacles in chemistry ensure that life by chance is untenable.
1. The Problem of Unreactivity
2. The Problem of Ionization
3. The Problem of Mass Action
4. The Problem of Reactivity
5. The Problem of Selectivity
6. The Problem of Solubility
7. The Problem of Sugar
8. The Problem of Chirality
The RNA World: A Critique - Gordon Mills and Dean Kenyon:
OOL (Origin Of Life) on the Rocks:
New Scientist Weighs in on the Origin of Life - Jonathan M. - August 17, 2011
Excerpt: To conclude, Michael Marshall's New Scientist article does not even come close to demonstrating the feasibility of the RNA world hypothesis, much less the origin of the sequence-specific information necessary for even the simplest of biological systems.
The Origin of Life: An RNA World? - Jonathan M. - August 22, 2011 (Refutation of Nick Matzke)
Excerpt Summary & Conclusion
We have explored just a small handful of the confounding difficulties confronting the chemical origin of life. This is not a god-of-the-gaps argument, as Matzke claims, but rather a positive argument, based on our uniform and repeated experience of cause-and-effect. It is not based on what we don't know, but on what we do know: that intelligence is a necessary and sufficient condition for the production of novel complex and functionally specified information. The design inference is based on sound and conventional scientific methodology. It utilizes the historical or abductive method and infers to the best explanation from multiple competing hypotheses.
Origin of Life: Claiming Something for Almost Nothing - March 2010
Excerpt: A look through the paper, however, shows complex lab procedures that are hard to justify in nature. (intelligence is required for even this meager step),,, the problem of sequencing the nucleotides – the key question – has not been addressed. Where did the genetic code come from? One ribozyme is not a code.
Since the RNA World has so many insurmountable problems, some evolutionists have tried the 'metabolism first' scenario to try to get past the gargantuan probabilistic hurdles facing the origin-of-life 'problem'. But yet again the evolutionists have failed miserably:
A realistic look at the preceding paper is found here:
Metabolism-First Origin of Life Won’t Work
Excerpt: "“We do not know how the transition to digitally encoded information has happened in the originally inanimate world; that is, we do not know where the RNA world might have come from,"
Douglas Axe also comments on the results of the preceding study here:
Other leading researchers find the metabolism first scenario wholly implausible:
“Pigs don’t fly”
Even leading 'new atheist' Richard Dawkins admits no one has a clue how the first living cell 'evolved':
Leading Darwinist Richard Dawkins Dodges Debates, Refuses to Defend Evolution - Stephen Meyer
"(Richard) Dawkins says that there is no evidence for intelligent design in life, and yet he also acknowledges that neither he nor anyone else has an evolutionary explanation for the origin of the first living cell. We know now even the simplest forms of life are chock-full of digital code, complex information processing systems and other exquisite forms of nanotechnology."
In realizing the staggering impossibilities presented by any conceivable origin-of-life scenario, some materialists, including Francis Crick, the co-discoverer of the DNA double helix, have, in my opinion, completely left the field of experimental biology and suggested panspermia, the theory that pre-biotic amino acids, or life itself, came to earth from outer space on comets, or was even delivered by UFOs, to account for the sudden appearance of life on earth.
Richard Dawkins Vs. Ben Stein - The UFO Interview - video
The panspermia hypothesis, which is really born out of sheer desperation rather than any sound reasoning on the materialists' part, has several problems. One problem is that astronomers, using spectral analysis, have not found any vast reservoirs of biological molecules anywhere they have looked in the universe (Ross; Creation as Science). Another problem is that, even if comets were nothing but pre-biotic amino acid snowballs, how would the amino acids molecularly survive the furnace-like temperatures generated when a comet crashes into the earth?
Botching Evolutionary Science - Casey Luskin - April 2009
Excerpt: Of course, the textbook makes no mention of studies which have shown that such impacts would likely vaporize organic molecules carried to earth. (See Edward Anders, "Pre-biotic organic matter from comets and asteroids," Nature, Vol. 342:255-257 (1989).)
Dr. Hugh Ross has surmised that delivery by meteorites or comets has now effectively been ruled out because of the homochirality problem: no natural source has been found anywhere in the universe for the exclusively 'left-handed' amino acids needed to build life:
"Circularly polarized UV light only produces a 17% excess (of R or L-amino-acids) and such selective destruction of organics require monochromatic light (monochromatic light isn’t known to occur naturally anywhere in the universe). So directed panspermia (life delivered by UFO's) is their last resort." Hugh Ross
I would like to reiterate: materialism postulated a very simple first cell, yet the simplest cell scientists have been able to find on earth, which can't even be seen with the naked eye, is vastly more complex than any machine man has ever made through concerted effort. This is especially true since a cell can self-replicate with seeming ease whereas a machine cannot. The following site has an interactive graph that lets people look into this 'invisible' world of microbes:
CELL TO CARBON ATOM - SIZE AND SCALE - Interactive Graph - Move cursor at the bottom of graph to the right to reduce the size:
Here is a neat little video clip that I wish was a bit longer (they say a longer one is in the works):
The Flow – Resonance Film – video
Description: The Flow, from inside a cell, looks at the supervening layers of reality that we can observe, from quarks to nucleons to atoms and beyond. The deeper we go into the foundations of reality the more it loses its form, eventually becoming a pure mathematical conception.
The smallest cyano-bacterium known to science has hundreds of millions of individual atomic molecules (not counting water molecules), divided into nearly a thousand completely distinct atomic molecule types; and a genome (DNA sequence) of 1.8 million bits, with over a million individual protein molecules which are sub-divided into hundreds of distinct protein classes. Once again, the integrated complexity found in the simplest bacterium known to science easily outclasses the integrated complexity of any machine man has ever made. The following articles and videos make this point clear:
"The manuals needed for building the entire space shuttle and all its components and all its support systems would be truly enormous! Yet the specified complexity (information) of even the simplest form of life - a bacterium - is arguably as great as that of the space shuttle."
J.C. Sanford - Geneticist - Genetic Entropy and the Mystery Of the Genome
"The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica." Carl Sagan, "Life" in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
Ben Stein - EXPELLED - The Staggering Complexity Of The Cell - video
“Although the tiniest living things known to science, bacterial cells, are incredibly small (10^-12 grams), each is a veritable micro-miniaturized factory containing thousands of elegantly designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the non-living world”. Michael Denton, "Evolution: A Theory in Crisis," 1986, p. 250.
The Cell as a Collection of Protein Machines
Bruce Alberts: Former President, National Academy of Sciences;
The Cell - A World Of Complexity Darwin Never Dreamed Of - Donald E. Johnson - video
Bioinformatics: The Information in Life - Donald Johnson - video
On this episode of ID the Future, Casey Luskin interviews Dr. Donald E. Johnson about his 2010 book, Programming of Life, which compares the workings of biology to a computer.
Programming of Life - February 2012 - podcast
Here is the video that goes with the 'Programming Of Life' book:
Programming of Life - video
Here is Dr. Johnson's Home Page;
Science Integrity - Exposing Unsubstantiated Science Claims
On a slide in the preceding video, entitled 'Information Systems In Life', Dr. Johnson points out that:
* the genetic system is a pre-existing operating system;
* the specific genetic program (genome) is an application;
* the native language has a codon-based encryption system;
* the codes are read by enzyme computers with their own operating system;
* each enzyme’s output is to another operating system in a ribosome;
* codes are decrypted and output to tRNA computers;
* each codon-specified amino acid is transported to a protein construction site; and
* in each cell, there are multiple operating systems, multiple programming languages, encoding/decoding hardware and software, specialized communications systems, error detection/correction systems, specialized input/output for organelle control and feedback, and a variety of specialized “devices” to accomplish the tasks of life.
Cells Are Like Robust Computational Systems, - June 2009
Excerpt: Gene regulatory networks in cell nuclei are similar to cloud computing networks, such as Google or Yahoo!, researchers report today in the online journal Molecular Systems Biology. The similarity is that each system keeps working despite the failure of individual components, whether they are master genes or computer processors. ,,,,"We now have reason to think of cells as robust computational devices, employing redundancy in the same way that enables large computing systems, such as Amazon, to keep operating despite the fact that servers routinely fail."
Nanoelectronic Transistor Combined With Biological Machine Could Lead To Better Electronics: - Aug. 2009
Paramecium caudatum can communicate with neighbors using a non-molecular method, probably photons. The cell populations were separated either with glass, allowing photon transmission from 340 nm to longer waves, or quartz, being transmittable from 150 nm, i.e. from UV light to longer waves. Energy uptake, cell division rate and growth correlation were influenced.
Systems biology: Untangling the protein web - July 2009
Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening."
Simulations reveal new information about the gateway to the cell nucleus
Life Leads the Way to Invention - Feb. 2010
This stunning energy efficiency of a cell is found across all life domains, thus strongly suggesting that all life on earth was Intelligently Designed for maximal efficiency rather than having accidentally, and gradually, evolved:
Excerpt: Here, using the largest database to date, for 3,006 species that includes most of the range of biological diversity on the planet—from bacteria to elephants, and algae to sapling trees—we show that metabolism displays a striking degree of homeostasis across all of life.
Also of interest is that a cell apparently is successfully designed along the very stringent guidelines laid out by Landauer's principle of 'reversible computation' in order to achieve such amazing energy efficiency, something man has yet to accomplish in any meaningful way for computers:
Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon - Charles H. Bennett
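For reference, Landauer's principle puts the minimum energy that must be dissipated to erase one bit at E = kT ln 2; here is a minimal sketch of that calculation at roughly physiological temperature (the 310 K figure is an illustrative assumption):

import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # ~physiological temperature, K (assumption)
E_per_bit = k_B * T * math.log(2)   # Landauer limit: minimum energy to erase one bit
print(f"{E_per_bit:.2e} J per bit erased")   # ~3.0e-21 J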
Further quotes on the unmatched complexity of the cell:
“Each cell with genetic information, from bacteria to man, consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction and a capacity not equaled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours" Geneticist Michael Denton PhD. Evolution: A Theory In Crisis pg. 329
"To grasp the reality of life as it has been revealed by molecular biology, we must first magnify a cell a thousand million times until it is 20 kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would see then would be an object of unparalleled complexity,...we would find ourselves in a world of supreme technology and bewildering complexity."
Geneticist Michael Denton PhD., Evolution: A Theory In Crisis, pg.328
Building a Cell: Staggering Complexity: - Feb. 2010
Excerpt: “All organisms, from bacteria to humans, face the daunting task of replicating, packaging and segregating up to two metres (about 6 x 10^9 base pairs) of DNA when each cell divides,” “,,,the segregation machinery must function with far greater accuracy than man-made machines and with an exquisitely soft touch to prevent the DNA strands from breaking.” Bloom and Joglekar talked “machine language” over and over. The cell has specialized machines for all kinds of tasks: segregation machines, packaging machines, elaborate machines, streamlined machines, protein translocation machines, DNA-processing machines, DNA-translocation machines, robust macromolecular machines, accurate machines, ratchets, translocation pumps, mitotic spindles, DNA springs, coupling devices, and more. The authors struggle to “understand how these remarkable machines function with such exquisite accuracy.”
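The 'two metres' figure in that excerpt follows directly from the geometry of B-form DNA, which rises about 0.34 nanometres per base pair; a quick check:

base_pairs = 6e9          # ~6 x 10^9 base pairs per dividing human cell, per the excerpt
rise_per_bp_m = 0.34e-9   # axial rise of B-DNA per base pair (~0.34 nm)
print(base_pairs * rise_per_bp_m)   # ~2.04 metres, matching 'up to two metres' of DNA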
Here is a good article that came out in GN magazine in Nov. 2009:
10 Ways Darwin Got It Wrong
Excerpt: As molecular biologist Jonathan Wells and mathematician William Dembski point out: “It’s true that eukaryotic cells are the most complicated cells we know. But the simplest life forms we know, the prokaryotic cells (such as bacteria, which lack a nucleus), are themselves immensely complex.,,, There is no evidence whatsoever of earlier, more primitive life forms from which prokaryotes might have evolved” (How to Be an Intellectually Fulfilled Atheist (or Not), 2008, p. 4). These authors then mention what these two types of cells share in terms of complexity:
• Information processing, storage and retrieval.
• Artificial languages and their decoding systems.
• Error detection, correction and proofreading devices for quality control.
• Digital data-embedding technology.
• Transportation and distribution systems.
• Automated parcel addressing (similar to zip codes and UPS labels).
• Assembly processes employing pre-fabrication and modular construction.
• Self-reproducing robotic manufacturing plants.
So it turns out that cells are far more complex and sophisticated than Darwin could have conceived of. How did mere chance produce this, when even human planning and engineering cannot?
There simply is no "simple life" on earth as materialism had presumed - even the well known single celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes.
Programming of Life - Biological Computers - video
The inner life of a cell - Harvard University - video
User's guide to the video
Dr. Fazale (Fuz) Rana discusses the beauty and elegance of biochemistry - video
Here are some fairly simple overviews of the cell:
How the Body Works: The Cell - video
Programming of Life - Eukaryotic Cell - video
Map Of Major Metabolic Pathways In A Cell - Diagram
Glycolysis and the Citric Acid Cycle: The Control of Proteins and Pathways - Cornelius Hunter - July 2011
Metabolism: A Cascade of Design
Excerpt: A team of biological and chemical engineers wanted to understand just how robust metabolic pathways are. To gain this insight, the researchers compared how far the errors cascade in pathways found in a variety of single-celled organisms with errors in randomly generated metabolic pathways. They learned that when defects occur in the cell’s metabolic pathways, they cascade much shorter distances than when errors occur in random metabolic routes. Thus, it appears that metabolic pathways in nature are highly optimized and unusually robust, demonstrating that metabolic networks in the protoplasm are not haphazardly arranged but highly organized.
Making the Case for Intelligent Design More Robust
Excerpt: ,,, In other words, metabolic pathways are optimized to withstand inevitable concentration changes of metabolites.
Wonders of the Cell - 2008 - Christopher Wayne Ashcraft - video
Primary Cilium As Cellular 'GPS System' Crucial To Wound Repair
Mere Biochemistry: Cell Division Involves Thousands of Complex, Interacting Parts - September 2010
Astonishingly, actual motors, which far surpass man-made motors in 'engineering parameters', are now being found inside 'simple cells'.
Articles and Videos on Molecular Motors
Michael Behe - Life Reeks Of Design - 2010 - video
Macroevolution, Good Science, and Redeeming Mathematics - Kate Deddens - February 2012
Excerpted quote: As obviously designed as a spaceship or a computer…Evolutionary biologists have been able to pretend to know how complex biological systems originated only because they treated them as black boxes. Now that biochemists have opened the black boxes and see what is inside, they know the Darwinian theory is just a story, not a scientific explanation…
(Phillip E. Johnson, Defeating Darwinism, Downers Grove, IL: InterVarsity Press, 1997, 77-78.)
James Shapiro - Molecular Biologist
The following expert doesn't even hide his very unscientific preconceived philosophical bias against intelligent design,,,
*Professor Emeritus of Biochemistry, Colorado State University, USA
Michael Behe - No Scientific Literature For Evolution of Any Irreducibly Complex Molecular Machines
David Ray Griffin - retired professor of philosophy of religion and theology
What I find very persuasive regarding the suggestion that the universe was designed with life in mind is that physicists find many processes in a cell operate at the 'near optimal' capacities allowed in any physical system:
William Bialek - Professor Of Physics - Princeton University:
Excerpt: "A central theme in my research is an appreciation for how well things “work” in biological systems. It is, after all, some notion of functional behavior that distinguishes life from inanimate matter, and it is a challenge to quantify this functionality in a language that parallels our characterization of other physical systems. Strikingly, when we do this (and there are not so many cases where it has been done!), the performance of biological systems often approaches some limits set by basic physical principles. While it is popular to view biological mechanisms as an historical record of evolutionary and developmental compromises, these observations on functional performance point toward a very different view of life as having selected a set of near optimal mechanisms for its most crucial tasks.,,,The idea of performance near the physical limits crosses many levels of biological organization, from single molecules to cells to perception and learning in the brain,,,,"
Physicists Finding Perfection… in Biology — June 1st, 2009 by Biologic Staff
Excerpt: "biological processes tend to be optimal in cases where this can be tested."
Also of note: There is a fairly substantial economic payoff to be had for presupposing superior 'Intelligent Design' in life, as is testified to by the burgeoning field of Biomimicry:
Biomimicry - Superior Designs That Were Found In Life
Also of note: sometimes evolutionists will point to the Rubisco enzyme as an example of 'bad design', but it turns out the Rubisco enzyme is indeed optimal for the purpose for which it was created: supporting the higher life forms above it, higher life forms that Rubisco is neither aware of nor cares about.
Rubisco is not an example of unintelligent design - David Tyler
Excerpt: Rubisco's ability to capture CO2 increases with increasing CO2 content in the atmosphere, so its efficiency rises in a CO2-rich atmosphere. However, increasing oxygen levels in the atmosphere will reduce Rubisco's ability to capture carbon. So a negative feedback mechanism exists to regulate the relative concentrations of oxygen and carbon dioxide in the atmosphere. This is another example of design affecting the Earth's ecology,,
From 3.8 to 0.6 billion years ago photosynthetic bacteria, and sulfate-reducing bacteria, dominated the geologic and fossil record (that’s over 80% of the entire time life has existed on earth). The geologic and fossil record also reveals that, during this time, a large portion of these very first bacterial life-forms lived in irreducibly complex, symbiotic, mutually beneficial colonies called Stromatolites. Stromatolites are rock-like structures the photo-synthetic bacteria built up over many years, much like coral reefs are slowly built up over many years by the tiny creatures called corals. Although Stromatolites are not nearly as widespread as they once were, they are still around today in a few sparse places like Shark Bay, Australia.
Michael Denton - Stromatolites Are Extremely Ancient - video
Shark's Bay - Modern Stromatolites - Pictures
Both the oldest Stromatolite fossils, and the oldest bacterium fossils, found on earth demonstrate an extreme conservation of morphology which, very contrary to evolutionary thought, simply means they have not changed and look very similar to Stromatolites and bacteria of today.
Odd Geometry of Bacteria May Provide New Way to Study Earth's Oldest Fossils - May 2010
Excerpt: Known as stromatolites, the layered rock formations are considered to be the oldest fossils on Earth.,,,That the spacing pattern corresponds to the mats' metabolic period -- and is also seen in ancient rocks -- shows that the same basic physical processes of diffusion and competition seen today were happening billions of years ago,,,
Everything new is old again: Photosynthesis from 3.3 billion years ago - July 2011
Excerpt: The most direct evidence yet for ancient photosynthesis has been uncovered in a fossil of a matted carpet of microbes that lived on a beach 3.3 billion years ago.
Excerpt: These (fossilized bacteria) cells are actually very similar to present day cyanobacteria. This is not only true for an isolated case but many living genera of cyanobacteria can be linked to fossil cyanobacteria. The detail noted in the fossils of this group gives indication of extreme conservation of morphology, more extreme than in other organisms.
Static evolution: is pond scum the same now as billions of years ago?
Excerpt: But what intrigues (paleo-biologist) J. William Schopf most is lack of change. Schopf was struck 30 years ago by the apparent similarities between some 1-billion-year-old fossils of blue-green bacteria and their modern microbial counterparts. "They surprisingly looked exactly like modern species," Schopf recalls. Now, after comparing data from throughout the world, Schopf and others have concluded that modern pond scum differs little from the ancient blue-greens. "This similarity in morphology is widespread among fossils of [varying] times," says Schopf. As evidence, he cites the 3,000 such fossils found;
Bacteria: Fossil Record - Ancient Compared to Modern - Picture
Contrary to what materialism would expect, these very first photosynthetic bacteria found in the fossil record, and by chemical analysis of the geological record, are shown to have been preparing the earth for more advanced life to appear from the very start of their existence, by producing the necessary oxygen for higher life-forms to exist and by reducing the greenhouse gases of earth’s early atmosphere. Photosynthetic bacteria slowly removed the carbon dioxide, and built up the oxygen, in the earth’s atmosphere primarily by the following photosynthetic chemical reaction:

6H2O + 6CO2 → C6H12O6 + 6O2 (driven by light energy)

The above chemical equation translates as:
Six molecules of water plus six molecules of carbon dioxide produce one molecule of sugar plus six molecules of oxygen
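A trivial atom-count check confirms that the equation as written balances (a minimal sketch, with the element counts entered by hand):

# atom counts on each side of 6H2O + 6CO2 -> C6H12O6 + 6O2
reactants = {"C": 6, "H": 6 * 2, "O": 6 + 6 * 2}   # 6 CO2 and 6 H2O
products = {"C": 6, "H": 12, "O": 6 + 6 * 2}       # C6H12O6 and 6 O2
print(reactants == products)   # True: carbon, hydrogen and oxygen all balance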
Interestingly, the gradual removal of greenhouse gases corresponded to the gradual 15% increase of light and heat coming from the sun during that time (Ross; Creation as Science). This 'lucky' correspondence of the slow increase of heat from the sun with the same perfectly timed slow removal of greenhouse gases from the earth’s atmosphere was necessary to keep the earth from cascading into either a 'greenhouse earth' or 'snowball earth'.
Why Didn't Early Earth Freeze? The Mystery Deepens - April 2010
Excerpt: The results were "very surprising," Rosing says. As to the question of what kept the planet warm instead of CO2, he says his research points to two possibilities. First, Earth's land masses were much smaller billions of years ago, meaning that the oceans, which generally are darker than continents, could absorb more of the sun's heat. Second, because life was brand new, organisms were manufacturing little of the gases that help clouds form. So, more sunlight reached the surface.,, There are bound to be other factors, Rosing says. "I think that our paper is just one link in a long chain of further refinements of our understanding of the early Earth and of the dynamics of our planet."
This following paper offers methane gas as a possible contributing solution to the faint sun paradox:
Methane-Based Greenhouse and Anti-Greenhouse Events Led to Stable Archean Climate
This following paper shows that the Earth's early atmosphere would have been stripped away by the sun if it had not been finely tuned:
Earth’s Primordial Atmosphere Must Be Fine-Tuned - Hugh Ross
Excerpt: The team then produced calculations demonstrating that the only reasonable scenario for explaining why the Sun’s radiation did not remove Earth’s primordial atmosphere was that the early Earth’s atmosphere was at least a hundred times richer in carbon dioxide.
This following study shows that the buildup of oxygen in the atmosphere was more gradual than previously thought:
Rise of Atmospheric Oxygen More Complicated (Gradual) Than Previously Thought - December 2011
More interesting still, the byproducts of the complex biogeochemical processes involved in the oxygen production by these early bacteria are (red banded) iron formations, limestone, marble, gypsum, phosphates, sand, and, to a lesser extent, coal, oil and natural gas (note: though some coal, oil and natural gas deposits are from this early era of bacterial life, most such deposits originated on earth after the Cambrian explosion of higher life forms some 540 million years ago). The resources produced by these early photosynthetic bacteria are very useful, one could even very well say 'necessary', for the technologically advanced civilizations of humans today to exist.
The following video is good for seeing just how far back the red banded iron formations really go (3.8 billion years ago). But be warned, Dr. Newman operates from a materialistic worldview and makes many unwarranted allusions to the 'magical' power of evolution to produce photosynthetic bacteria. Although, to be fair, she does readily acknowledge the staggering level of complexity being dealt with in photosynthesis, as well as admitting that no one really has a clue how photosynthesis 'evolved'.
Exploring the deep connection between bacteria and rocks - Dianne Newman - MIT lecture video
These following papers back up Dr. Newman's assertion of extremely ancient oxygenic photosynthesis with other lines of evidence:
Ancient Microbes Responsible for Breathing Life Into Ocean 'Deserts' - August 2010
Excerpt: Brian Kendall and Ariel Anbar, together with colleagues at other institutions, show that "oxygen oases" in the surface ocean were sites of significant oxygen production long before the breathing gas began to accumulate in the atmosphere..,, What Kendall discovered was a unique relationship of high rhenium and low molybdenum enrichments in the samples from South Africa, pointing to the presence of dissolved oxygen on the seafloor itself.,,, "It was especially satisfying to see two different geochemical methods -- rhenium and molybdenum abundances and Fe chemistry -- independently tell the same story," Kendall noted. Evidence that the atmosphere contained at most minute amounts of oxygen came from measurements of the relative abundances of sulfur (S) isotopes.
Breathing new life into Earth: New research shows evidence of early oxygen on our planet - August 2011
These following articles explore some of the other complex geochemical processes that are also involved in the forming of the red banded iron formations, and other precious ore formations, on the ancient earth.
Banded Rocks Reveal Early Earth Conditions, Changes
Rich Ore Deposits Linked to Ancient Atmosphere - Nov. 2009
Interestingly, while the photosynthetic bacteria were reducing greenhouse gases and producing oxygen, metals, and minerals, which would all be of benefit to modern man, 'sulfate-reducing' bacteria were also producing their own natural resources which would be very useful to modern man. Sulfate-reducing bacteria helped prepare the earth for advanced life by detoxifying the primeval earth and oceans of poisonous levels of heavy metals, depositing them as relatively inert metal ores, ores which are very useful for modern man as well as fairly easy to extract today (mercury, cadmium, zinc, cobalt, arsenic, chromate, tellurium and copper, to name a few). To this day, sulfate-reducing bacteria maintain these heavy metals in the ecosystem at an essential minimal level, high enough to be available to the biological systems of the higher life forms that need them, yet low enough not to be poisonous to those very same higher life forms.
Bacterial Heavy Metal Detoxification and Resistance Systems:
The role of bacteria in hydrogeochemistry, metal cycling and ore deposit formation:
Textures of sulfide minerals formed by SRB (sulfate-reducing bacteria) during bioremediation (most notably pyrite and sphalerite) are reminiscent of those in certain sediment-hosted ores, supporting the concept that SRB may have been directly involved in forming ore minerals.
Researchers Identify Mysterious Life Forms in the Extreme Deep Sea
Excerpt: Xenophyophores are noteworthy for their size, with individual cells often exceeding 10 centimeters (4 inches), their extreme abundance on the seafloor and their role as hosts for a variety of organisms.,,, The researchers spotted the life forms at depths up to 10,641 meters (6.6 miles) within the Sirena Deep of the Mariana Trench.,,, Scientists say xenophyophores are the largest individual cells in existence. Recent studies indicate that by trapping particles from the water, xenophyophores can concentrate high levels of lead, uranium and mercury,,,
Man has only recently caught on to harnessing the ancient detoxification ability of bacteria to clean up his accidental toxic spills, as well as his toxic waste from industry:
What is Bioremediation? - video
Metal-mining bacteria are green chemists - Sept. 2010
Further note:
Arsenic removal: research on bioremediation using arsenite-eating bacteria
As a side note to this, bacteria recently surprised scientists with their ability to quickly detoxify the millions of barrels of oil spilled in the Gulf of Mexico:
Mighty oil-eating microbes help clean up the Gulf - July 2010
Excerpt: Where is all the oil? Nearly two weeks after BP finally capped the biggest oil spill in U.S. history, the oil slicks that once spread across thousands of miles of the Gulf of Mexico have largely disappeared. Nor has much oil washed up on the sandy beaches and marshes along the Louisiana coast.,,, The lesson from past spills is that the lion’s share of the cleanup work is done by nature in the form of oil-eating bacteria and fungi. (Thank God)
Deepwater Oil Plume in Gulf Degraded by Microbes, Study Shows
Excerpt: An intensive study by scientists with the Lawrence Berkeley National Laboratory (Berkeley Lab) found that microbial activity degrades oil much faster than anticipated. This degradation appears to take place without a significant level of oxygen depletion.
Methane Gas Concentrations in Gulf of Mexico Quickly Returned to Near-Normal Levels, Surprising Researchers - January 2011
Excerpt: Calling the results "extremely surprising", researchers report that methane gas concentrations in the Gulf of Mexico have returned to near normal levels only months after a massive release occurred following the Deepwater Horizon oil rig explosion.
Microbes Consumed Oil in Gulf Slick at Unexpected Rates, Study Finds - August 2011
Excerpt: "Our study shows that the dynamic microbial community of the Gulf of Mexico supported remarkable rates of oil respiration, despite a dearth of dissolved nutrients," the researchers said. Edwards added that the results suggest "that microbes had the metabolic potential to break down a large portion of hydrocarbons and keep up with the flow rate from the wellhead."
Here are a couple of sites showing the crucial link between minimal levels of metals and biological life:
Transitional Metals And Cytochrome C oxidase - Michael Denton - Nature's Destiny
Proteins prove their metal - July 2010
Your Copper Pipes - November 2011
As well, in conjunction with bacteria, geological processes helped detoxify the earth of dangerous levels of metal:
The Concentration of Metals for Humanity's Benefit:
Excerpt: They demonstrated that hydrothermal fluid flow could enrich the concentration of metals like zinc, lead, and copper by at least a factor of a thousand. They also showed that ore deposits formed by hydrothermal fluid flows at or above these concentration levels exist throughout Earth's crust. The necessary just-right precipitation conditions needed to yield such high concentrations demand extraordinary fine-tuning. That such ore deposits are common in Earth's crust strongly suggests supernatural design.
And on top of the fact that poisonous heavy metals on the primordial earth were brought into 'life-enabling' balance by complex biogeochemical processes, there was also an explosion of minerals on earth which resulted from that first life, as well as from each subsequent 'Big Bang of life' thereafter.
The Creation of Minerals:
"Today there are about 4,400 known minerals - more than two-thirds of which came into being only because of the way life changed the planet. Some of them were created exclusively by living organisms" - Bob Hazen - Smithsonian - Oct. 2010, pg. 54
To put it mildly, this minimization of poisonous elements, and 'explosion' of useful minerals, is strong evidence for Intelligently Designed terra-forming of the earth that 'just so happens' to be of great benefit to modern man.
Clearly many, if not all, of these metal ores and minerals, laid down by these sulfate-reducing bacteria, by the biogeochemistry of more complex life, and by finely-tuned geological conditions throughout the early history of the earth, have many unique properties which are crucial for technologically advanced life, and are thus indispensable to man’s rise above the stone age to the advanced 'space-age' technology of modern civilization.
Minerals and Their Uses
Mineral Uses In Industry
Inventions: Elements and Compounds - video
Bombardment Makes Civilization Possible
What is the common thread among the following items: pacemakers, spark plugs, fountain pens and compass bearings? Give up? All of them currently use (or used in early versions) the two densest elements, osmium and iridium. These two elements play important roles in technological advancements. However, if certain special events hadn't occurred early in Earth's history, no osmium or iridium would exist near the planet's surface.
As well, many types of bacteria in earth's early history lived in what are called cryptogamic colonies on the earth's primeval continents. These colonies dramatically transformed the primeval land into stable, nutrient-filled soils which were receptive for future advanced vegetation to appear.
Land organisms from Cambrian found in soil layer under the soil - November 2011
Excerpt: Other evidence of life on land includes quilted spheroids (Erytholus globosus gen. et sp. nov.) and thallose impressions (Farghera sp. indet.), which may have been slime moulds and lichens, respectively. These distinctive fossils in Cambrian palaeosols represent communities comparable with modern biological soil crusts.
Cryptobiotic Soils: Holding the Place in Place
Excerpt: Cryptobiotic soil crusts, consisting of soil cyanobacteria, lichens and mosses, play an important ecological role,,, Cryptobiotic crusts increase the stability of otherwise easily eroded soils, increase water infiltration in regions that receive little precipitation, and increase fertility in soils often limited in essential nutrients such as nitrogen and carbon (Harper and Marble, 1988; Johansen, 1993; Metting, 1991; Belnap and Gardner, 1993; Belnap, 1994; Williams et al., 1995).
Bacterial 'Ropes' Tie Down Shifting Southwest
Excerpt: When moistened, cyanobacteria become active, moving through the soil and leaving a trail of sticky material behind. The sheath material sticks to surfaces such as rock or soil particles, forming an intricate web of fibers throughout the soil. In this way, loose soil particles are joined together, and an otherwise unstable surface becomes very resistant to both wind and water erosion.
Moreover, worms, in addition to their critical role in soil aeration, are also found to detoxify the soils of poisonous heavy metals:
The worm that turned on heavy metal - December 2010
Excerpt: The team has carried out two feasibility studies on the use of worms in treating waste. The team first used compost produced by worms, vermicompost, as a successful adsorbent substrate for remediation of wastewater contaminated with the metals nickel, chromium, vanadium and lead. The second used earthworms directly for remediation of arsenic and mercury present in landfill soils and demonstrated an efficiency of 42 to 72% in approximately two weeks for arsenic removal and 7.5 to 30.2% for mercury removal in the same time period.
Materialism simply has no coherent answers for why these different bacterial types, biogeochemical processes, worms, etc., would start working in precise concert with each other, preparing the earth for future life, from the very start of their first appearance on earth.
In a further related note, several different types of bacteria are found to be integral to the nitrogen fixation cycle required for plants:
nitrogen fixation - illustration
nitrogen fixation - video:
This following study reveals just how crucial, and how finely tuned, the nitrogen cycle is:
Engineering and Science Magazine - Caltech - March 2010
Excerpt: “Without these microbes, the planet would run out of biologically available nitrogen in less than a month,” Realizations like this are stimulating a flourishing field of “geobiology” – the study of relationships between life and the earth. One member of the Caltech team commented, “If all bacteria and archaea just stopped functioning, life on Earth would come to an abrupt halt.” Microbes are key players in earth’s nutrient cycles. Dr. Orphan added, “...every fifth breath you take, thank a microbe.”
Planet's Nitrogen Cycle Overturned - Oct. 2009
Excerpt: "Ammonia is a waste product that can be toxic to animals.,,, archaea can scavenge nitrogen-containing ammonia in the most barren environments of the deep sea, solving a long-running mystery of how the microorganisms can survive in that environment. Archaea therefore not only play a role, but are central to the planetary nitrogen cycles on which all life depends.,,,the organism can survive on a mere whiff of ammonia – 10 nanomolar concentration, equivalent to a teaspoon of ammonia salt in 10 million gallons of water."
Novel Nitrogen Uptake Design - Oct. 2009
Excerpt: The exceptionality of the snow roots and their nitrogen-capturing machinery, their extraordinarily complex designs, and their optimal efficiency qualifies them as evidence, not for evolution, but rather for supernatural design.
Arbuscular Mycorrhizal Fungi Design
Excerpt: The mutual relationship between vascular plants (flowering plants) and arbuscular mycorrhizal fungi (AMF) is the most prevalent known plant symbiosis. Vascular plants provide sites all along their root systems where colonies of AMF can assemble and feed on the nutrients supplied by the plants. In return, the AMF supply phosphorus, nitrogen, and carbon in molecular forms that the vascular plants can readily assimilate. The (overwhelming) challenge for evolutionary models is how to explain by natural means the simultaneous appearance of both vascular plants and AMF.
Of somewhat related interest to this topic, colonies of bacteria are found to have some mysterious way of communicating essential information very quickly amongst themselves:
Electrical Communication in Bacteria - August 2010
Excerpt: These responses occurred too quickly for any sort of chemical exchange or molecular process such as osmosis, says Nielsen. The most plausible option, his team reports in the 25 February issue of Nature, is that the bacteria are somehow communicating electrically by transmitting electrons back and forth. How exactly they do this is unclear,
Moreover, the overall principle of long-term balanced symbiosis, which is in fact what we have with the overall biogeochemical cycles of the earth, is a fact thoroughly opposed to random chance, one which pervades the entire ecology of our planet and points powerfully to the intentional craftsmanship of a Designer:
Intelligent Design - Symbiosis and the Golden Ratio - video
God's Creation - Symbiotic (Cooperative) Relationships - video
Some Trees 'Farm' Bacteria to Help Supply Nutrients - July 2010
Since oxygen readily reacts and bonds with many of the solid elements making up the earth itself, and since the slow process of tectonic activity controls the turnover of the earth's crust, it took photosynthetic bacteria a few billion years to saturate the earth’s crust with oxygen before a sufficient level of oxygen could build up in the atmosphere to allow higher life:
New Wrinkle In Ancient Ocean Chemistry - Oct. 2009
Increases in Oxygen Prepare Earth for Complex Life
Excerpt: We at RTB argue that any mechanism exhibiting complex, integrated actions that bring about a specified outcome is designed. Studies of Earth’s history reveal highly orchestrated interplay between astronomical, geological, biological, atmospheric, and chemical processes that transform the planet from an uninhabitable wasteland to a place teeming with advanced life. The implications of design are overwhelming.
As well, plate tectonics is also shown to be finely tuned, and thus tied to the 'terra-forming' intelligent design perspective, in this following paper:
Evidence of Early Plate Tectonics
Once sufficient oxygenation of the earth's mantle and atmosphere was finally accomplished, higher life forms could finally be introduced on earth. Moreover, scientists find the rise in oxygen percentages in the geologic record to correspond exactly to the sudden appearance of large animals in the fossil record that depend on those particular percentages of oxygen to be present. The geologic record shows a 10% oxygen level at the time of the Cambrian explosion of higher life-forms in the fossil record some 540 million years ago. The geologic record also shows a strange and very quick rise from the 17% oxygen level of 50 million years ago to a 23% oxygen level 40 million years ago (Falkowski 2005, 2008). This strange rise in oxygen levels corresponds exactly to the abrupt appearance in the fossil record of large mammals which depend on those high oxygen levels. Interestingly, for the last 10 million years the oxygen percentage has been holding steady at around 21%, which happens to be a 'very comfortable' percentage for humans to exist. If the oxygen level were only a few percentage points lower, large mammals would be severely hampered in their ability to metabolize energy; if only a few percentage points higher, there would be uncontrollable outbreaks of fire across the land (Denton; Nature's Destiny).
Composition Of Atmosphere - Pie Chart and Percentages:
The interplay of the biogeochemical (life and earth) processes that produce this balanced, life-enabling, oxygen-rich atmosphere is very complex:
The Life and Death of Oxygen - 2008
Excerpt: “The balance between burial of organic matter and its oxidation appears to have been tightly controlled over the past 500 million years.” “The presence of O2 in the atmosphere requires an imbalance between oxygenic photosynthesis and aerobic respiration on time scales of millions of years; hence, to generate an oxidized atmosphere, more organic matter must be buried (by tectonic activity) than respired.” - Paul Falkowski
The Oxygen and Carbon Dioxide Cycle - video
This following article and video clearly indicate that the life-sustaining balanced symbiosis of the atmosphere is far more robust in tolerating man's industrial activities than Global Warming alarmists would have us believe:
Earth's Capacity To Absorb CO2 Much Greater Than Expected: Nov. 2009
A Really Inconvenient Truth!
Global Warming Apocalypse? No! - video
Because complex photosynthetic bacterial life is chemically required to establish, and help maintain, the proper oxygen levels necessary for higher life forms on any earth-like planet, we have further reason to strongly believe the earth is extremely unique in its ability to support intelligent life in this universe. What is more remarkable is that this balance for the atmosphere is maintained through complex symbiotic relationships with other bacteria, all of which are intertwined in very complex geochemical processes. This is irreducible complexity stacked on top of irreducible complexity!!! All of these studies of early life, and processes, on the early earth fall directly in line with the anthropic hypothesis, and no materialistic theory based on blind chance has any rational explanation as to why all the first types of bacterial life found in the fossil record would suddenly, from the very start of their appearance on earth, start working in precise harmony with each other, and with geology, to prepare the earth for future life to appear. Nor can materialism explain why, once these complex bacterial-geological processes had helped prepare the earth for higher life forms, they continue to work in precise harmony with each other to help maintain the proper balanced conditions that are of primary benefit for the higher life that is above them:
The Microbial Engines That Drive Earth’s Biogeochemical Cycles - Falkowski 2008
Excerpt: Microbial life can easily live without us; we, however, cannot survive without the global catalysis and environmental transformations it provides. - Paul G. Falkowski - Professor Geological Sciences - Rutgers
Biologically mediated cycles for hydrogen, carbon, nitrogen, oxygen, sulfur, and iron - image of interdependent 'biogeochemical' web
Interestingly, when Dr. Ross factors in the probability for 'simple' bacterial life randomly happening in this universe, which is necessary for more advanced life to exist on any planet in the first place, the odds against a planet which can host life explode into gargantuan proportions:
Does the Probability for ETI = 1?
Excerpt: In another book I wrote with Fuz, Who Was Adam?, we describe calculations done by evolutionary biologist Francisco Ayala and by astrophysicists John Barrow, Brandon Carter, and Frank Tipler for the probability that a bacterium would evolve under ideal natural conditions—given the presumption that the mechanisms for natural biological evolution are both effective and rapid. They determine that probability to be no more than 10^-24,000,000.
The bottom line is that rather than the probability for extraterrestrial intelligent life being 1 as Aczel claims, very conservatively from a naturalistic perspective it is much less than 10^(500 + 22 - 1054 - 100,000,000,000 - 24,000,000). That is, it is less than 10^-100,024,000,532. In longhand notation it would be 0.00 … 001, with 100,024,000,531 zeros (one hundred billion, twenty-four million, five hundred thirty-one zeros) between the decimal point and the 1. That longhand notation of the probability would fill over 20,000 complete Bibles. (As far as scientific calculations of how close a probability is to zero are concerned, only Penrose's 1 in 10^10^123 calculation, for the initial phase-space of the universe, is closer.)
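Since independent probabilities multiply, their base-10 exponents simply add; here is a minimal sketch in Python of the exponent arithmetic above, using only the exponents Ross lists:

# Exponents of the individual probability factors cited above.
# Multiplying probabilities adds their base-10 exponents.
exponents = [500, 22, -1054, -100_000_000_000, -24_000_000]
print(sum(exponents))  # -100024000532, i.e. the bound 10^-100,024,000,532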
Anthropic Principle: A Precise Plan for Humanity By Hugh Ross
At least one scientist is far more pessimistic about the 'natural' future lifespan of the human race than 20,000 years:
Humans will be extinct in 100 years says eminent scientist - June 2010
This following study, of a vital enzyme found in all life, conforms to the notion of 'terraforming' the toxic primordial earth; as well, I would argue that the enzyme conforms to the principle of 'Genetic Entropy', since the enzyme was reconstructed from the data of many different 'derived' enzymes:
Enzymes Complex from the Get-go
Excerpt: “Given the ancient origin of the reconstructed thioredoxin enzymes (a vital enzyme found in all living cells), with some of them predating the buildup of atmospheric oxygen, we expected their catalytic chemistry to be simple," said Fernandez. "Instead we found that enzymes that existed in the Precambrian era up to four billion years ago possessed many of the same chemical mechanisms observed in their modern-day relatives.”,, Further examination of the ancient enzymes revealed some striking features: The enzymes were highly resistant to temperature and were active in more acidic conditions. The findings suggest that the species hosting these ancient enzymes thrived in very hot environments that since then have progressively cooled down, and that they lived in oceans that were more acidic than today.
Though it is impossible to reconstruct the DNA of the earliest bacteria fossils scientists find in the fossil record, and to compare it to that of their descendants of today, there are many ancient bacteria spores, recovered and 'revived' from salt crystals and amber crystals, which have been compared to their living descendants of today. Some bacterium spores in salt crystals, dating back as far as 250 million years, have been revived, had their DNA sequenced, and compared to their offspring of today (Vreeland RH, 2000 Nature). To the disbelieving shock of many evolutionary scientists, both ancient and modern bacteria were found to have almost exactly the same DNA sequence.
The Paradox of the "Ancient" (250 Million Year Old) Bacterium Which Contains "Modern" Protein-Coding Genes:
Evolutionists were so disbelieving at this stunning lack of change, far less change than was expected from the neo-Darwinian view, that they insisted the similarity was due to modern contamination in Vreeland's experiment. Yet the following study laid that objection to rest by verifying that Dr. Vreeland's methodology for extracting ancient DNA was solid and was not introducing contamination, because the DNA sequences this time around were completely unique:
World’s Oldest Known DNA Discovered (419 million years old) - Dec. 2009
Excerpt: But the DNA was so similar to that of modern microbes that many scientists believed the samples had been contaminated. Not so this time around. A team of researchers led by Jong Soo Park of Dalhousie University in Halifax, Canada, found six segments of identical DNA that have never been seen before by science. “We went back and collected DNA sequences from all known halophilic bacteria and compared them to what we had,” Russell Vreeland of West Chester University in Pennsylvania said. “These six pieces were unique",,,
These following studies, by Dr. Cano on ancient bacteria, preceded Dr. Vreeland's work:
“Raul J. Cano and Monica K. Borucki discovered the bacteria preserved within the abdomens of insects encased in pieces of amber. In the last 4 years, they have revived more than 1,000 types of bacteria and microorganisms — some dating back as far as 135 million years ago, during the age of the dinosaurs.,,, In October 2000, another research group used many of the techniques developed by Cano’s lab to revive 250-million-year-old bacteria from spores trapped in salt crystals. With this additional evidence, it now seems that the “impossible” is true.”
Dr. Cano and his former graduate student Dr. Monica K. Borucki said that they had found slight but significant differences between the DNA of the ancient, 25-40 million year old amber-sealed Bacillus sphaericus and that of its modern counterpart (thus ruling out that it is a modern contaminant, yet at the same time confounding materialists, since the change is not nearly as great as evolution's 'genetic drift' theory requires).
30-Million-Year Sleep: Germ Is Declared Alive
Dr. Cano's work on ancient bacteria came in for intense scrutiny since it did not conform to Darwinian predictions, and since people found it hard to believe you could revive something that was millions of years old. Yet Dr. Cano has been vindicated:
“After the onslaught of publicity and worldwide attention (and scrutiny) after the publication of our discovery in Science, there have been, as expected, a considerable number of challenges to our claims, but in this case, the scientific method has smiled on us. There have been at least three independent verifications of the isolation of a living microorganism from amber."
In reply to a personal e-mail from me, Dr. Cano commented on the 'Fitness Test' I had asked him about:
Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate."
Fitness test which compared ancient bacteria to its modern day descendants, RJ Cano and MK Borucki
Thus, the most solid evidence available, from the most ancient DNA scientists are able to find, does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy, a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking about an impressive loss of protein complexity, and thus a loss of functional information, from the ancient amber-sealed bacteria. Here is a revisit to the video of the 'Fitness Test' that evolutionary processes have NEVER passed as a demonstration of the generation of functional complexity/information above what was already present in a parent species of bacteria:
Is Antibiotic Resistance evidence for evolution? - 'Fitness Test' - video
According to prevailing evolutionary dogma, there 'HAS' to be 'major genetic drift' in the DNA of modern bacteria from 250 million years ago, even though the morphology (shape) of the bacteria can be expected to remain exactly the same. In spite of their preconceived materialistic bias, scientists find there is no significant genetic drift from the ancient DNA. In fact, recent research with bacteria which are alive right now has also severely weakened the 'genetic drift' argument of evolutionists:
The consequences of genetic drift for bacterial genome complexity - Howard Ochman - 2009
Excerpt: The increased availability of sequenced bacterial genomes allows application of an alternative estimator of drift, the genome-wide ratio of replacement to silent substitutions in protein-coding sequences. This ratio, which reflects the action of purifying selection across the entire genome, shows a strong inverse relationship with genome size, indicating that drift promotes genome reduction in bacteria.
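To illustrate the estimator the excerpt describes, here is a toy sketch in Python; the substitution counts are invented purely for illustration:

# Toy illustration of the genome-wide estimator described above: the ratio of
# replacement (amino-acid-changing) to silent substitutions in protein-coding genes.
# NOTE: these counts are hypothetical, invented for the example.
replacement_subs = 120
silent_subs = 400
ratio = replacement_subs / silent_subs
print(ratio)  # 0.3; higher ratios indicate weaker purifying selection (more drift)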
I find it interesting that the materialistic theory of evolution expects there to be a significant amount of genetic drift from the DNA of ancient bacteria to that of its modern descendants, while the morphology is allowed to remain exactly the same. Alas for the atheistic materialist, once again the hard evidence of ancient DNA has fallen in line with the anthropic hypothesis.
Many times a materialist will offer what he considers conclusive proof for evolution by showing bacteria that have become resistant to a certain antibiotic such as penicillin. Yet upon close inspection, once again this 'conclusive proof' dissolves away. All observed instances of 'beneficial' adaptations of bacteria to new antibiotics have been shown to be the result of degradation of preexisting molecular abilities:
List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
Moreover, it is shown that nothing new has evolved, since ancient bacteria have the very same ability to develop resistance to antibiotics as modern strains do:
Antibiotic resistance is ancient - September 2011
Evolution - Tested And Falsified - Don Patton - video
The following is a reflection on the true implications of the 'evolution' of bacteria becoming resistant to multiple antibiotics, which has many people concerned as to its danger:
Superbugs not super after all
MRSA - Supergerms Do they prove evolution?
In places that are exposed to dirt from the street—such as your house—the supergerms are kept in their place not by powerful drugs and poisons but by competition with other germs. And their resistance genes are diluted by genes of the susceptible or non-resistant germs of the same species rather than being concentrated by selective breeding. That is why most non-hospital infections respond readily to antibiotics—the drug kills most of the germs, the body takes care of the rest. If it were not so, the so called supergerms would escape from hospitals and sweep the world.
Are You Too Clean? - New Studies Suggest Getting A Little Dirty May Be Just What The Doctor Ordered - December 2010
For materialists to conclusively prove evolution, they would have to violate the principle of Genetic Entropy by clearly demonstrating a gain of functional information bits (Fits) over the parent species (Abel - Null-Hypothesis) in the fitness test which I've listed previously. Materialists have not done so, nor will they ever. The staggering interrelated complexity of the integrated whole of a distinct 'kind' of life-form simply will not allow the generation of complex functional information above the parent species to happen in its genome by chance alone. (Sanford, Genetic Entropy 2005)
This following site highlights the problem that the integrated complexity of a genome presents for the Neo-Darwinian mechanism of random mutation:
Poly-Functional Complexity equals Poly-Constrained Complexity
This following quote reiterates the principle that material processes cannot generate functional information:
“There is no known law of nature, no known process and no known sequence of events which can cause information to originate by itself in matter.” Werner Gitt, “In the Beginning was Information”, 1997, p. 106. (Dr. Gitt was the Director at the German Federal Institute of Physics and Technology) His challenge to scientifically falsify this statement has remained unanswered since first published.
Some materialists believe they have conclusive proof for evolution because bacteria can quickly adapt to detoxify new man-made materials, such as nylon, even though this is, once again, just a minor variation within kind; i.e., though the bacteria adapt, they still do not demonstrate a gain in fitness over the parent strain once the nylon is consumed (Genetic Entropy). I’m not nearly as impressed with their 'stunning proof' as they think I should be. In fact, recent research has shown the correct explanation for the nylon-eating enzyme, produced on the plasmids, seems to be a special mechanism which recombines parts of the genes in the plasmids in a way that is non-random. This is shown by the absence of stop codons, which would be generated if the variation were truly random. The 'clockwork' repeatability of the adaptation clearly indicates a designed mechanism that fits perfectly within the limited 'variation within kind' model of Theism, and stays well within the principle of Genetic Entropy, since the parent strain is still more fit for survival once the nylon is consumed from the environment. (Answers In Genesis)
Nylon Degradation – Analysis of Genetic Entropy
Excerpt: At the phenotypic level, the appearance of nylon degrading bacteria would seem to involve “evolution” of new enzymes and transport systems. However, further molecular analysis of the bacterial transformation reveals mutations resulting in degeneration of pre-existing systems.
Why Scientists Should NOT Dismiss Intelligent Design - William Dembski
Excerpt: "the nylonase enzyme seems “pre-designed” in the sense that the original DNA sequence was preadapted for frame-shift mutations to occur without destroying the protein-coding potential of the original gene. Indeed, this protein sequence seems designed to be specifically adaptable to novel functions."
Though Darwinists love to claim this as a 'new' protein, the simple fact is that it is the same exact enzyme/protein, an esterase, with only a minor variation on its previous enzymatic activity:
“Mutational analysis of 6-aminohexanoate-dimer hydrolase: Relationship between nylon oligomer hydrolytic and esterolytic activities”
Excerpt: “Based upon the following findings, we propose that the nylon oligomer hydrolase has newly evolved through amino acid substitutions in the catalytic cleft of a pre-existing esterase with the b-lactamase-fold”.
Taku Ohki, Yoshiaki Wakitani, Masahiro Takeo, Kengo Yasuhira, Naoki Shibata, Yoshiki Higuchi, Seiji Negoro - FEBS Letters 580 (2006) 5054–5058
In fact, it is now strongly suspected that all changes in the genome which are deemed to be 'beneficial' are actually 'designed' changes that still stay within the overriding principle of Genetic Entropy:
Revisiting The Central Dogma (Of Evolution) In The 21st Century - James Shapiro - 2008
Excerpt: Genetic change is almost always the result of cellular action on the genome (not replication errors). (of interest - 12 methods of 'epigenetic' information transfer in the cell are noted in the paper)
Scientists Discover What Makes The Same Type Of Cells Different - Oct. 2009
Excerpt: Until now, cell variability was simply called “noise”, implying statistical random distribution. However, the results of the study now show that the different reactions are not random, but that certain causes (environmental clues) lead to predictable distribution patterns,,,
Bacteria 'Invest' (Designed) Wisely to Survive Uncertain Times, Scientists Report - Dec. 2009
De Novo Genes: - Cornelius Hunter - Nov. 2009
Excerpt: Cells have remarkable adaptation capabilities. They can precisely adjust which segments of the genome are copied for use in the cell. They can edit and regulate those DNA copies according to their needs. And they can even modify the DNA itself, such as with adaptive mutations,,,,One apparent de novo gene is T-urf13 which was found in certain varieties of corn.
The secrets of intelligence lie within a single cell - April 2010
This overriding truth, that the Genetic Entropy of poly-constrained information can never be violated by natural means, applies to the 'non-living realm' of viruses, such as bird flu and HIV, as well:
Ryan Lucas Kitner, Ph.D. 2006. - Bird Flu
Excerpt: influenza viruses do possess a certain degree of variability; however, the amount of genetic information which a virus can carry is vastly limited, and so are the changes which can be made to its genome before it can no longer function.
As well, the virus is far more complex than many people have ever imagined, as this following video clearly points out:
Virus - Assembly Of A Nano-Machine - video
Though most people think of viruses as being very harmful to humans, the fact is that the Bacteriophage (Bacteria Eater) virus, shown in the preceding video, is actually very beneficial to man, for it is one of the main mechanisms found in nature by which bacteria populations are kept in check, so as to keep them from 'overpopulating' the world. If bacteria did not have such mechanisms keeping them in check, their effect on the environment would soon throw the entire ecology of the planet into chaos, thus making the earth inhospitable for higher life forms.
Michael Behe defends the one 'overlooked' protein-protein binding site generated by the HIV virus, which Abbie Smith and Ian Musgrave had found, by pointing out it is well within the 2-binding-site limit he set in "The Edge Of Evolution", on this following site:
Response to Ian Musgrave's "Open Letter to Dr. Michael Behe," Part 4
"Yes, one overlooked protein-protein interaction developed, leading to a leaky cell membrane --- not something to crow about after 10^20 replications and a greatly enhanced mutation rate."
An information-gaining mutation in HIV? NO!
In fact, I followed this debate very closely, and it turns out the trivial gain of just one protein-protein binding site by the non-living HIV virus, which the evolutionists were 'crowing' about, came at a staggering loss of complexity for the living host it invaded (people), with just that one trivial gain of a 'leaky cell membrane' in binding-site complexity. Thus the 'evolution' of the virus clearly stayed within the principle of Genetic Entropy, since far more functional complexity was lost by the living human cells it invaded than was ever gained by the non-living HIV virus, a virus which depends on those human cells to replicate in the first place. Moreover, while learning that HIV is a 'mutational powerhouse' which greatly outclasses the 'mutational firepower' of the entire spectrum of higher life-forms combined for millions of years, and about the devastating effect HIV has on humans with just that one trivial binding site being generated, I realized that if evolution were actually the truth about how life came to be on Earth, then the only 'life' that would be around would be extremely small organisms with the highest replication rate and the most mutational firepower, since only they would be fit enough to survive in the dog-eat-dog world where blind, pitiless evolution rules and only the 'fittest' are allowed to survive.
Dr. Meyer makes an interesting comment here about simple self-replicating molecules which got simpler very quickly under neo-Darwinian processes:
In a classic experiment, Spiegelman in 1967 showed what happens to a molecular replicating system in a test tube, without any cellular organization around it. … these initial templates did not stay the same; they were not accurately copied. They got shorter and shorter until they reached the minimal size compatible with the sequence retaining self-copying properties. And as they got shorter, the copying process went faster. - Stephen Meyer - The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, DE: ISI Books, 2011), p. 313–18.
This following link has a nice overview of Spiegelman's 1967 self-replication experiment, in which the replicating molecule got simpler:
Origins of Life – Freeman Dyson – page 74
Here is a defense of Dr. Behe's binding-site limit against the T-urf13 gene/protein, which Darwinists argued was a 'new' gene/protein that refuted Behe's limit:
How Arthur Hunt Fails To Refute Behe (T-URF13)- Jonathan M - February 2011
On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe
Excerpt: furthermore, T-urf 13 involves a kind of degradation of maize. In the case of the Texas maize–hence the T—the T-urf 13 was located by researchers because it was there that the toxin that decimated the corn grown in Texas in the late 60′s attached itself. So the “manufacturing” of this “de novo” gene proved to make the maize less fit. This is in keeping with Behe’s latest findings.
I would also like to point out that scientists have never changed any one type of single-cell organism, bacterium, or virus into any other type of single-cell organism, bacterium, or virus, despite years of exhaustive experimentation trying to change them. In fact, it is commonly known that the further scientists deviate any particular single-cell organism, bacterium, or virus type from its original state, the more unfit for survival the manipulated population quickly becomes (Genetic Entropy). As former president of the French Academy of Sciences Pierre P. Grasse has stated:
“What is the use of their unceasing mutations, if they do not change? In sum, the mutations of bacteria and viruses are merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect.”
As well, to reiterate what was said in another article I listed previously, bacteria that are resistant to multiple antibiotics (MRSA) are actually superwimps instead of supergerms. This is because the multiple deleterious mutations they have incurred from their interaction with different antibiotics make them dramatically less fit for survival in the wild than their non-mutated cousins:
Superbugs not super after all
NDM-1 Superbug the Result of Bad Policies, Not Compelling Evidence for Evolution's Creative Powers - Sept. 2010
'Random mutations', though touted as a great engine of creativity by evolutionists, are in fact a pitiful mechanism to explain the generation of the functional information that we find in life, as these following references show:
Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010
Excerpt: Most mutations in the genes of the Salmonella bacterium have a surprisingly small negative impact on bacterial fitness. And this is the case regardless whether they lead to changes in the bacterial proteins or not.,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
Random Mutations Destroy Information - Perry Marshall - video
Random Mutations and the Heroics of Evolution
Excerpt: A child once informed his friends his toy bulldozer could dig all the way through the Earth. But wasn’t the Earth too big? No, look at the Grand Canyon—it is proof of what such small shovels can do. Such childish logic, amazingly, shows up repeatedly in evolutionary “theory.”
Michael Behe's Blog - October 2007
Excerpt: As I showed for mutations that help in the human fight against malaria, many beneficial mutations actually are the result of breaking or degrading a gene. Since there are so many ways to break or degrade a gene, those sorts of beneficial mutations can happen relatively quickly. For example, there are hundreds of different mutations that degrade an enzyme abbreviated G6PD, which actually confers some resistance to malaria. Those certainly are beneficial in the circumstances. The big problem for evolution, however, is not to degrade genes (Darwinian random mutations can do that very well!) but to make the coherent, constructive changes needed to build new systems.
Materialists simply do not have any evidence for the truly 'beneficial' mutations they need to make evolution work. The following site has numerous quotes, studies and videos which reveal the overwhelmingly negative mutation rate which has been found in life:
Mutation Studies, Videos, And Quotes
It is also interesting to note that scientists have actually used a mechanism of 'excessive mutations' to help humans in their fight against pathogenic viruses, as the following articles clearly point out:
GM Crops May Face Genetic Meltdown
Excerpt: Error catastrophe occurs when high mutation rates give rise to so many deleterious mutations that they make the population go extinct. For example, foot and mouth disease virus treated with mutagens (base analogues fluorouracil and azacytidine) eventually become extinct [1]. Polio virus treated with the mutagenic drug ribavirin similarly went extinct [2].
Quasispecies Theory and the Behavior of RNA Viruses - July 2010
Excerpt: Many predictions of quasispecies theory run counter to traditional views of microbial behavior and evolution and have profound implications for our understanding of viral disease. ,,, it has been termed “mutational meltdown.” It is now clear that many RNA viruses replicate near the error threshold. Early studies with VSV showed that chemical mutagens generally reduced viral infectivity, and studies with poliovirus clearly demonstrated that mutagenic nucleoside analogs push viral populations to extinction [40]–[43]. The effect is dramatic—a 4-fold increase in mutation rate resulted in a 95% reduction in viral titer.,,, While mutation-independent activities have also been identified, it is clear that APOBEC-mediated lethal mutagenesis is a critical cellular defense against RNA viruses. The fact that these pathogens replicate close to the error threshold makes them particularly sensitive to slight increases in mutational load.,,,
In fact, trying to narrow down an actual hard number for the truly beneficial mutation rate, one that would actually explain the massively integrated machine-like complexity of proteins we find in life, is what Dr. Behe did in this following book:
The Edge Of Evolution - Michael Behe - Video Lecture
The numbers of Plasmodium and HIV in the last 50 years greatly exceed the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155).
Dr. Behe states in The Edge of Evolution on page 135:
"Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite."
Where's the substantiating evidence for neo-Darwinism?
Richard Dawkins’ The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe - Oct. 2009
Excerpt: The rarity of chloroquine resistance is not in question. In fact, Behe’s statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact.
Antimalarial drug resistance - Nicholas J. White
Excerpt: Resistance to chloroquine in P. falciparum has arisen spontaneously less than ten times in the past fifty years (14). This suggests that the per-parasite probability of developing resistance de novo is on the order of 1 in 10^20 parasite multiplications. The single point mutations in the gene encoding cytochrome b (cytB), which confer atovaquone resistance, or in the gene encoding dihydrofolate reductase (dhfr), which confer pyrimethamine resistance, have a per-parasite probability of arising de novo of approximately 1 in 10^12 parasite multiplications (5). To put this in context, an adult with approximately 2% parasitemia has 10^12 parasites in his or her body. But in the laboratory, much higher mutation rates than 1 in every 10^12 are recorded (12).
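To make the figures in White's excerpt concrete, here is a minimal back-of-the-envelope sketch in Python; the two numbers come from the excerpt itself, and treating each parasite multiplication as an independent trial is my simplifying assumption:

# Per-parasite probability of de novo chloroquine resistance (from White).
p_event = 1e-20
# Parasite load of one adult with ~2% parasitemia (from White).
parasites_per_adult = 1e12
# Expected number of such infections per spontaneous resistance event,
# treating each parasite multiplication as an independent trial (assumption).
print(1 / (p_event * parasites_per_adult))  # ~1e8, i.e. about 100 million infections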
An Atheist Interviews Michael Behe About "The Edge Of Evolution" - video
Thus, the actual rate for 'truly' beneficial mutations, ones which would account for the staggering machine-like complexity we see in life, is less than one per one-hundred-billion-billion (10^20) mutational events. So the one-in-a-thousand to one-in-a-million figure for 'truly' beneficial mutations is actually far, far too generous for evolutionists to use as an estimate in their 'hypothetical' calculations for beneficial mutations.
In fact, from consistent findings such as these, it is increasingly apparent that the principle of Genetic Entropy is the overriding foundational rule for all of biology, with no exceptions at all, and that belief in 'truly' beneficial mutations is nothing more than wishful speculation on the materialists' part which has no foundation in empirical science whatsoever.
Evolution vs. Genetic Entropy - video
The following article has a simple example of how Genetic Entropy plays out, even allowing that some mutations might truly be slightly beneficial as far as molecular functionality is concerned:
Richard Lenski’s Long-Term Evolution Experiments with E. coli and the Origin of New Biological Information
Excerpt: Even if there were several possible pathways by which to construct a gain-of-FCT mutation, or several possible kinds of adaptive gain-of-FCT features, the rate of appearance of an adaptive mutation that would arise from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene.
(Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” Quarterly Review of Biology, Vol. 85(4) (December, 2010).)
The sort of loss-of-function examples seen in Lenski's LTEE (Long Term Evolution Experiment) will never show that natural selection can increase high CSI. To understand why, imagine the following hypothetical situation.
Consider an imaginary order of insects, the Evolutionoptera. Let’s say there are 1 million species of Evolutionoptera, but ecologists find that the extinction rate among Evolutionoptera is 1000 species per millennium. The speciation rate (the rate at which new species arise) during the same period is 1 new species per 1000 years. At these rates, every thousand years 1000 species of Evolutionoptera will die off while one new species will develop, a net loss of 999 species. If these processes continue, in roughly a million years (about 1,002 millennia at these rates) there will be no species of Evolutionoptera left on earth.
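A minimal sketch in Python of the Evolutionoptera bookkeeping above (the rates are the hypothetical ones just stated):

# Hypothetical rates from the example: 1000 extinctions and 1 speciation
# per millennium, starting from 1,000,000 species.
species = 1_000_000
millennia = 0
while species > 0:
    species -= 1000 - 1  # net loss of 999 species per millennium
    millennia += 1
print(millennia * 1000)  # 1002000, i.e. extinction in roughly a million years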
More Darwinian Degradation: Much Ado about Yeast - Michael Behe - January 2012
The foundational overriding principle in the life sciences for explaining the sub-speciation of all species from any particular initial parent species that was designed is Genetic Entropy. Genetic Entropy is a rule which draws its foundation in science from the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (Dembski, Marks, Abel), and the principle can be stated something like this:
"All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species genome."
Genetic Entropy also fits very well with the theological question that many children ask their teachers, 'Why would a loving God allow pathogenic viruses and bacteria to exist?'
What about parasites? - September 2010
Excerpt: these parasites must have been benign and beneficial in their original form. Perhaps some were independent and free-living, and others had beneficial symbiotic relationships with animals or humans. ,,,These once-harmless creatures degenerated, and became parasitic and harmful.
The following shows that we can actually watch the 'final act' of Genetic Entropy, 'mutational meltdown', in the laboratory for small asexual populations (bacteria, yeast, etc.):
The Mutational Meltdown in Asexual Populations - Lynch
Excerpt: Loss of fitness due to the accumulation of deleterious mutations appears to be inevitable in small, obligately asexual populations, as these are incapable of reconstituting highly fit genotypes by recombination or back mutation. The cumulative buildup of such mutations is expected to lead to an eventual reduction in population size, and this facilitates the chance accumulation of future mutations. This synergistic interaction between population size reduction and mutation accumulation leads to an extinction process known as the mutational meltdown,,,
These following articles refute Richard E. Lenski's 'supposed evolution' of the citrate-eating ability in E. coli bacteria after 20,000 generations in his 'Long Term Evolution Experiment' (LTEE), which has been going on since 1988:
Multiple Mutations Needed for E. Coli - Michael Behe
Excerpt: As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.),,, If Lenski’s results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell.
Michael Behe's Quarterly Review of Biology Paper Critiques Richard Lenski's E. Coli Evolution Experiments - December 2010
Excerpt: After reviewing the results of Lenski's research, Behe concludes that the observed adaptive mutations all entail either loss or modification--but not gain--of Functional Coding ElemenTs (FCTs)
Richard Lenski's Long-Term Evolution Experiments with E. coli and the Origin of New Biological Information - September 2011
Excerpt: The results of future work aside, so far, during the course of the longest, most open-ended, and most extensive laboratory investigation of bacterial evolution, a number of adaptive mutations have been identified that endow the bacterial strain with greater fitness compared to that of the ancestral strain in the particular growth medium. The goal of Lenski's research was not to analyze adaptive mutations in terms of gain or loss of function, as is the focus here, but rather to address other longstanding evolutionary questions. Nonetheless, all of the mutations identified to date can readily be classified as either modification-of-function or loss-of-FCT.
(Michael J. Behe, "Experimental Evolution, Loss-of-Function Mutations and 'The First Rule of Adaptive Evolution'," Quarterly Review of Biology, Vol. 85(4) (December, 2010).)
Lenski's e-coli - Analysis of Genetic Entropy
Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less “fit” than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment.
Genetic Entropy Confirmed (in Lenski's e-coli) - June 2011
Excerpt: No increases in adaptation or fitness were observed, and no explanation was offered for how neo-Darwinism could overcome the downward trend in fitness.
Mutations : when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations)
The preceding experiment was interesting: after 50,000 generations of E. coli, which is equivalent to about 1,000,000 years of 'supposed' human evolution, they found only 5 'beneficial' mutations. Moreover, these 5 'beneficial' mutations were found to interfere with each other when they were combined in the ancestral population. Needless to say, this is far, far short of the functional complexity we find in life that neo-Darwinism is required to explain the origination of. Even more problematic for neo-Darwinism is the fact that Michael Behe showed that the 'beneficial' mutations were actually loss- or modification-of-function mutations; i.e., the individual 'beneficial' mutations were never shown to be in the process of building functional complexity at the molecular level in the first place!
Moreover, Lenski's work has also shown that 'convergent evolution' is impossible, because it has shown that evolution is 'historically contingent'. The following video and article make this point clear:
Lenski's Citrate E-Coli - Disproof of Convergent Evolution - Fazale Rana - video (the disproof of convergence starts at the 2:45 minute mark of the video)
The Long Term Evolution Experiment - Analysis
The loss of 'convergent evolution', as an argument for molecular sequence similarity in widely divergent species, is a major blow to neo-Darwinian storytelling:
Implications of Genetic Convergent Evolution for Common Descent - Casey Luskin - Sept. 2010
Origin of Hemoglobins: A Repeated Problem for Biological Evolution - 2010
Excerpt: When analyzed from an evolutionary perspective, it appears as if the hemoglobins originated independently in jawless vertebrates and jawed vertebrates.,,, This result fits awkwardly within the evolutionary framework. It also contradicts the results of the Long-term Experimental Evolution (LTEE; Lenski) study, which demonstrated that microevolutionary biochemical changes are historically contingent.
Convergence: Evidence for a Single Creator - Fazale Rana
Excerpt: When critically assessed, the evolutionary paradigm is found to be woefully inadequate when accounting for all the facets of biological convergence. On the other hand, biological convergence is readily explained by an origins model that evokes a single Creator (reusing optimal designs).
Bernard d'Abrera on Butterfly Mimicry and the Faith of the Evolutionist - October 2011
Excerpt: For it to happen in a single species once through chance, is mathematically highly improbable. But when it occurs so often, in so many species, and we are expected to apply mathematical probability yet again, then either mathematics is a useless tool, or we are being criminally blind.,,, Evolutionism (with its two eldest daughters, phylogenetics and cladistics) is the only systematic synthesis in the history of the universe (science) that proposes an Effect without a Final Cause. It is a great fraud, and cannot be taken seriously because it outrageously attempts to defend the philosophically indefensible.
Lenski's work also conforms to the extreme limit found for just two 'coordinated' mutations conferring any 'evolutionary benefit':
Michael Behe on the most recent Richard Lenski “evolvability” paper - April 2011
More from Lenski's Lab, Still Spinning Furiously - Michael Behe - January, 2012
Even more crushing evidence can be gleaned from Lenski's long term evolution experiment on E. coli. Upon even closer inspection, it seems Lenski's 'coddled' E. coli are actually headed for genetic meltdown instead of evolving into something, anything, better.
New Work by Richard Lenski:
Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
Sometimes a materialist will say that gene duplication is "the real engine of evolution," generating the new functional information in molecular biology. Yet they simply don't have any evidence to support that assertion:
Gene Duplication Quotes and Papers
Michael Behe, The Edge of Evolution, pg. 162 Swine Flu, Viruses, and the Edge of Evolution
"Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell--both ones we've discovered so far and ones we haven't--at best extremely limited benefit, since no such process was able to do much of anything. It's critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing--neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered--was of much use."
Again I would like to emphasize: I'm not arguing that Darwinism cannot make complex functional systems; the data on malaria, and the other examples, are an observation that it does not. In science, observation beats theory all the time. So Professor (Richard) Dawkins can speculate about what he thinks Darwinian processes could do, but in nature Darwinian processes have not been shown to do anything in particular.
Michael Behe - 46 minute mark of video lecture on 'The Edge of Evolution' for C-SPAN
Experimental evolution, loss-of-function mutations, and “the first rule of adaptive evolution” - Michael J. Behe - December 2010
Excerpt: In this paper, I review molecular changes underlying some adaptations, with a particular emphasis on evolutionary experiments with microbes conducted over the past four decades. I show that by far the most common adaptive changes seen in those examples are due to the loss or modification of a pre-existing molecular function, and I discuss the possible reasons for the prominence of such mutations.
Mike Behe on a new journal paper admitting that Darwinian evolution 'can’t do' complex systems - August 2011
The following experiment recently confirmed the severe limit for evolution found by Dr Behe:
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness - Ann K. Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke – 2010
Excerpt: When all of these possibilities are left open by the experimental design, the populations consistently take paths that reduce expression of trpAE49V,D60N, making the path to new (restored) function virtually inaccessible. This demonstrates that the cost of expressing genes that provide weak new functions is a significant constraint on the emergence of new functions. In particular, populations with multiple adaptive paths open to them may be much less likely to take an adaptive path to high fitness if that path requires over-expression.
Response from Ralph Seelke to David Hillis Regarding Testimony on Bacterial Evolution Before Texas State Board of Education, January 21, 2009
Excerpt: He has done excellent work showing the capabilities of evolution when it can take one step at a time. I have used a different approach to show the difficulties that evolution encounters when it must take two steps at a time. So while similar, our work has important differences, and Dr. Bull’s research has not contradicted or refuted my own.
Epistasis between Beneficial Mutations - July 2011
Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations.
Behe and Snoke go even further, addressing the severe problems with the gene duplication scenario in the following study:
Simulating evolution by gene duplication of protein features that require multiple amino acid residues - Michael Behe and David Snoke - 2004
Interestingly, Fred Hoyle arrived at the same conclusion, of a two-amino-acid limit, years earlier from a 'mathematical' angle:
The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations - Douglas D. Axe* - December 2010
quote of note: ,, the most significant implication comes not from how the two cases contrast but rather how they cohere—both showing severe limitations to complex adaptation. To appreciate this, consider the tremendous number of cells needed to achieve adaptations of such limited complexity. As a basis for calculation, we have assumed a bacterial population that maintained an effective size of 10^9 individuals through 10^3 generations each year for billions of years. This amounts to well over a billion trillion (10^21) opportunities (in the form of individuals whose lines were not destined to expire imminently) for evolutionary experimentation. Yet what these enormous resources are expected to have accomplished, in terms of combined base changes, can be counted on the fingers.
This following paper clearly reveals that there is a 'cost' to duplicate genes that further precludes the scenario from being plausible:
Experimental Evolution of Gene Duplicates in a Bacterial Plasmid Model
Excerpt: In a striking contradiction to our model, no such conditions were found. The fitness cost of carrying both plasmids increased dramatically as antibiotic levels were raised, and either the wild-type plasmid was lost or the cells did not grow. This study highlights the importance of the cost of duplicate genes and the quantitative nature of the tradeoff in the evolution of gene duplication through functional divergence.
This recent paper also found the gene duplication scenario to be highly implausible:
The Extinction Dynamics of Bacterial Pseudogenes - Kuo and Ochman - August 2010
Excerpt: "Because all bacterial groups, as well as those Archaea examined, display a mutational pattern that is biased towards deletions and their haploid genomes would be more susceptible to dominant-negative effects that pseudogenes might impart, it is likely that the process of adaptive removal of pseudogenes is pervasive among prokaryotes."
Many times evolutionists are very deceptive in claiming that evolutionary processes can generate functional information, as with the duplicate-gene scenario, when in fact no one has ever experimentally demonstrated a gain in functional information, above a parent species, that would violate the principle of genetic entropy. The following articles reveal some of the many elaborate ploys evolutionists have used in the past to try to deceive, yes deceive!, the public into thinking evolutionary processes can easily generate functional information:
Assessing the NCSE’s Citation Bluffs on the Evolution of New Genetic Information - Feb. 2010
Casey Luskin, in response to a comment from Nick Matzke claiming that there are many examples of neo-Darwinian processes creating functional information in life, points out the 'scientifically' vacuous nature of the vast majority of neo-Darwinian claims for the origin of new biological information here:
Excerpt: many papers which Nick (Matzke) would probably claim show the “origin of new genetic information” invoked natural selection, but then:
did not identify a stepwise mutational pathway,
did not identify what advantages might be gained at each step
did not calculate the plausibility of this pathway evolving under known population sizes, mutation rates, and other relevant probabilistic resources, and in many cases
Is it persuasive to invoke natural selection as the cause of new genetic information when you don’t even know what function is being selected? This is why I said that in many cases, natural selection is used as a “magic wand.” It’s just asserted, even though no one really knows what was going on.
How to Play the Gene Evolution Game - Casey Luskin - Feb. 2010
The NCSE, Judge Jones, and Bluffs About the Origin of New Functional Genetic Information - Casey Luskin - March 2010
To answer our fourth question (What evidence is found for the appearance of all species of life on earth, and is man the last species to appear on earth?) we come to the evidence found for the amazing variety of complex life on earth.
Cambrian Explosion thru Refutation Of Human Evolution
Anonymous said...
great blog! thanks!
Ilíon said...
Quoting Behe: "There is now considerable evidence that genes alone do not control development. For example when an egg's genes (DNA) are removed and replaced with genes (DNA) from another type of animal, development follows the pattern of the original egg until the embryo dies from lack of the right proteins. (The rare exceptions to this rule involve animals that could normally mate to produce hybrids.) ..."
I had long wondered about that. I had, in fact, strongly suspected that that is how it would be.
Anonymous said...
Useful material. Thanks. Do you have time to organize it?
kuartus said...
Hey BA, I think you might find this article interesting:
Direct measurement of the quantum wavefunction
Anonymous said...
outstanding post! I hate commenting and i dont usually do it but because i enjoyed this, what the heck! Thanks alot!:)
|
7bee758a3739227b |
Monster waves blamed for shipping disasters
Stephen Ornes
An oil tanker heads into a monster wave. Photo: Digitally manipulated/Getty Images
Storm clouds were gathering as the boat ventured eastwards out of the port at around 1pm on March 3, 2010. The sea swell steadily increased during the first hours of the voyage, enough to test those with less-experienced sea legs, but still nothing out of the ordinary.
A few decades ago, rogue waves of the sort that hit the Louis Majesty were the stuff of salty sea dogs’ legends. No more. Real-world observations, backed up by improved theory and lab experiments, leave no doubt any more that monster waves happen – and not infrequently. The question has become: can we predict when and where they will occur?
Science has been slow to catch up with rogue waves. There is not even any universally accepted definition. One with wide currency is that a rogue is at least double the significant wave height, itself defined as the average height of the tallest third of waves in any given region.
What this amounts to is a little dependent on context: on a calm sea with significant waves 10 centimetres tall, a wave of 20 centimetres might be deemed a rogue.
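To make the definition concrete, here is a minimal sketch (mine, not the article's) of that calculation, assuming nothing more than a list of measured wave heights; the roughly 12-metre sea and 26-metre wave in the example are the Draupner figures reported below.

import numpy as np

def significant_wave_height(heights):
    # Mean height of the tallest third of the waves (H_1/3).
    tallest_third = np.sort(heights)[-len(heights) // 3:]
    return tallest_third.mean()

def rogue_indices(heights, factor=2.0):
    # Indices of waves at least `factor` times the significant wave height.
    hs = significant_wave_height(heights)
    return [i for i, h in enumerate(heights) if h >= factor * hs], hs

# A roughly 12-metre significant sea with a single 26-metre wave inserted.
rng = np.random.default_rng(1)
sea = rng.rayleigh(scale=6.0, size=1000)
sea[137] = 26.0
idx, hs = rogue_indices(sea)
print(f"H_1/3 = {hs:.1f} m, rogue candidates at {idx}")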
If that seems a little lackadaisical, for a long time the models oceanographers used to predict wave heights suggested anomalously tall waves barely existed. These models rested on the principle of linear superposition: that when two trains of waves meet, the heights of the peaks and troughs at each point simply sum.
It was only in the late 1960s that Thomas Brooke Benjamin and J.E. Feir of the University of Cambridge spotted an instability in the underlying mathematics. When longer-wavelength waves catch up with shorter-wavelength ones, all the energy of a wave train can become abruptly concentrated in a few monster waves – or just one. Longer waves travel faster in the deep ocean, so this is a perfectly plausible real-world scenario.
The pair went on to test the theory in a then state-of-the-art, 400-metre-long towing tank, complete with wave-maker, at the UK National Physical Laboratory facility on the outskirts of London.
Near the wave-maker, which perturbed the water at varying speeds, the waves were uniform and civil. But about 60 metres on they became distorted, forming into short-lived, larger waves that we would now call rogues (though to avoid unwarranted splashing, the initial waves were just a few centimetres tall).
It took a while for this new intelligence to trickle through. “Waves become unstable and can concentrate energy on their own,” says Takuji Waseda, an oceanographer at the University of Tokyo in Japan. “But for a long time, people thought this was a theoretical thing that does not exist in the real oceans.”
Theory and observation finally crashed together in 1995 in the North Sea, about 150 kilometres off the coast of Norway. New Year’s Day that year was tumultuous around the Draupner sea platform, with a significant wave height of 12 metres.
At around 3.20pm, however, accelerometers and strain sensors mounted on the platform registered a single wave towering 26 metres over its surrounding troughs. According to the prevailing wisdom, this was a once-in-10,000-year occurrence.
The Draupner wave ushered in a new era of rogue-wave science, says physicist Ira Didenkulova at Tallinn University of Technology in Estonia. In 2000, the European Union initiated the three-year MaxWave project. During a three-week stretch early in 2003, it used boat-based radar and satellite data to scan the world’s oceans for giant waves, turning up 10 that were 25 metres or more tall.
We now know that rogue waves can arise in every ocean. The North Atlantic, the Drake Passage between Antarctica and the southern tip of South America, and the waters off the southern coast of South Africa are particularly prone. Rogues possibly also occur in some large freshwater bodies such as the Great Lakes of North America.
That casts historical accounts in a new light, and rogue waves are now thought to have had a part in the unexplained losses of some 200 cargo vessels in the two decades preceding 2004.
So rogue waves exist, but what makes one in the real world?
Miguel Onorato at the University of Torino, Italy, has spent more than a decade trying to answer that question.
His tool is the non-linear Schrödinger equation, which has long been used to second-guess unpredictable situations in both classical and quantum physics. Onorato uses it to build computer simulations and guide wave-tank experiments in an attempt to coax rogues from ripples.
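For reference, a commonly quoted form of that equation for the slowly varying envelope A(x,t) of deep-water waves is the following (a sketch in one common convention; normalizations differ between authors, and this is not necessarily the exact form Onorato uses):

i\left(\frac{\partial A}{\partial t}+c_g\frac{\partial A}{\partial x}\right)-\frac{\omega_0}{8k_0^2}\frac{\partial^2 A}{\partial x^2}-\frac{\omega_0 k_0^2}{2}|A|^2A=0

Here k_0 and \omega_0 are the carrier wavenumber and frequency and c_g=\omega_0/2k_0 is the deep-water group velocity; it is the focusing nonlinear term that lets energy concentrate into a few large waves.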
Gradually, Onorato and others are building up a catalogue of real-world rogue-generating situations. One is when a storm swell runs into a powerful current going the other way. This is often the case along the North Atlantic’s Gulf Stream, or where sea swells run counter to the Agulhas current off South Africa. Another is a “crossing sea”, in which two wave systems – often one generated by local winds and a sea swell from further afield – converge from different directions and create instabilities.
Crossing seas have long been a suspect. A 2005 analysis used data from the maritime information service Lloyd’s List Intelligence to show that, depending on the precise definition, up to half of ship accidents chalked up to bad weather occur in crossing seas.
In 2011, the finger was pointed at a crossing sea in the Draupner incident, and Onorato thinks it might also have been the Louis Majesty’s downfall. When he and his team fed wind and wave data into his model to “hindcast” the state of the sea in the area at the time, it indicated that two wave trains were converging on the ship, one from a north-easterly direction and one more from the south-east, separated by an angle of between 40 and 60 degrees.
Simpler situations might generate rogues, too. Last year, Waseda revisited an incident in December 1980 when a cargo carrier loaded with coal lost its entire bow to a monster wave with an estimated height of 20 metres in the “Dragon’s Triangle”, a region of the Pacific south of Japan that is notorious for accidents.
A Japanese government investigation had blamed a crossing sea, but when Waseda used a more sophisticated wave model to hindcast the conditions, he found it likely that a strong gale had poured energy into a single wave system far larger than conventional models allowed.
He thinks such single-system rogues could account for other accidents, too – and that the models need further updating. “We used to think ocean waves could be described simply, but it turns out they’re changing at the same pace and same time scale as the wind, which changes rapidly,” he says.
In 2012, Onorato and others showed that the models even allow for the possibility of “super rogues” towering as much as 11 times the height of the surrounding seas, a possibility since borne out in water-tank experiments.
With climate change potentially whipping up more intense storms, such theoretical possibilities are becoming a serious practical concern. From 2009 to 2013, the EU funded a project called Extreme Seas, which brought shipbuilders together with academic researchers including Onorato, with the aim of producing boats with hulls designed to withstand rogue waves.
That is a high-cost, long-term solution, however. The best defence remains simply knowing when a rogue wave is likely to strike. “We can at least warn that sea states are rapidly changing, possibly in a dangerous direction,” says Waseda.
Various indices have been developed that aim to convert raw satellite and sea-state data into this sort of warning. One of the most widely used is the Benjamin-Feir index, named after the two pioneers of rogue-wave research. Formulated in 2003 by Peter Janssen of the European Centre for Medium-Range Weather Forecasts in Reading, UK, it is calculated for sea squares 20 kilometres by 20 kilometres, and is now incorporated into the centre’s twice-daily sea forecasts.
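Schematically (the exact normalization varies between papers, so treat this as a sketch), the index compares the steepness of the sea to its spectral bandwidth:

\mathrm{BFI}\sim\frac{k_p\sqrt{m_0}}{\Delta\omega/\omega_p}

where m_0 is the variance of the surface elevation, k_p and \omega_p are the wavenumber and frequency at the spectral peak, and \Delta\omega is the spectral width; values of order one or larger flag a sea prone to the modulational instability described above.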
“Ship routing officers use it as an indicator to see whether they should go through a particular area,” says Janssen.
The ultimate aim would be to allow ships to do that themselves. Most large ocean-going ships now carry wide-sweeping sensors that determine the heights of waves by analysing radar echoes.
Computer software can turn those radar measurements into a three-dimensional map of the sea state, showing the size and motions of the surrounding swell.
It would be a relatively small step to include software that can flag up indicators of a sea about to go rogue, such as quickly changing winds or crossing seas. Such a system might let crew and passengers avoid at-risk areas of a ship.
The main bar to that happening is computing power: existing models can’t quite crunch through all the fast-moving fluctuations of the ocean rapidly enough to generate fine-grained warnings in real time.
For Waseda, the answer is to develop a central early warning system, such as those that operate for tsunamis and tropical storms, to inform ships about to leave port. Thanks to our advances in understanding a phenomenon whose existence was doubted only decades ago, there is no reason now why we can’t do that for rogue waves, says Waseda.
“At this point it’s not a shortage of theory, but a shortage of communication.”
- New Scientist
Seven giants
In 2007, Paul Liu at the US National Oceanic and Atmospheric Administration compiled a catalogue of more than 50 historical incidents probably associated with rogue waves. Here are some of the most significant:
1498 Columbus recounts how, on his third expedition to the Americas, a giant wave lifts up his boats during the night as they pass through a strait near Trinidad. Supposedly using Columbus’s words, to this day this area of sea is called the Bocas del Dragón – the Mouths of the Dragon.
1853 The Annie Jane, a ship carrying 500 emigrants from England to Canada, is hit. Only about 100 make it to shore alive, to Vatersay, an island in Scotland’s Outer Hebrides.
1884 A rogue wave off West Africa sinks the Mignonette, a yacht sailing from England to Australia. The crew of four escape in a dinghy. After 19 days adrift, the captain kills the teenage cabin boy to provide food for the other three survivors.
1909 The steamship SS Waratah disappears without trace with over 200 people on board off the coast of South Africa – a swathe of sea now known for its high incidence of rogue waves.
1943 Two monster waves in quick succession pummel the Queen Elizabeth cruise liner as it crosses the North Atlantic, breaking windows 28 metres above the waterline.
1978 The German merchant navy supertanker MS München disappears in the stormy North Atlantic en route from Bremerhaven to Savannah, Georgia, leaving only a scattering of life rafts and emergency buoys.
2001 Just days apart, two cruise ships – the Bremen and the Caledonian Star – have their bridge windows smashed by waves estimated to be 30 metres tall in the South Atlantic.
- New Scientist
|
7c9461cbf1ff2948 | Finding the Schrödinger Equation for the Hydrogen Atom
Using the Schrödinger equation tells you just about all you need to know about the hydrogen atom, and it's all based on a single assumption: that the wave function must go to zero as r goes to infinity, which is what makes solving the Schrödinger equation possible.
Hydrogen atoms are composed of a single proton, around which revolves a single electron. You can see how that looks in the following figure.
Note that the proton isn't at the exact center of the atom — the center of mass is at the exact center. In fact, the proton is at a radius of r_p from the exact center, and the electron is at a radius of r_e.
The hydrogen atom.
So what does the Schrödinger equation, which will give you the wave equations you need, look like? Well, it includes terms for the kinetic and potential energy of the proton and the electron. Here's the term for the proton's kinetic energy:

-\frac{\hbar^2}{2m_p}\left(\frac{\partial^2}{\partial x_p^2}+\frac{\partial^2}{\partial y_p^2}+\frac{\partial^2}{\partial z_p^2}\right)

Here, x_p is the proton's x position, y_p is the proton's y position, and z_p is its z position.
The Schrödinger equation also includes a term for the electron's kinetic energy:

-\frac{\hbar^2}{2m_e}\left(\frac{\partial^2}{\partial x_e^2}+\frac{\partial^2}{\partial y_e^2}+\frac{\partial^2}{\partial z_e^2}\right)

Here, x_e is the electron's x position, y_e is the electron's y position, and z_e is its z position.
Besides the kinetic energy, you have to include the potential energy, V(r), in the Schrödinger equation, which makes the time-independent Schrödinger equation look like this:

\left[-\frac{\hbar^2}{2m_p}\nabla_p^2-\frac{\hbar^2}{2m_e}\nabla_e^2+V(r)\right]\psi(\vec{r}_p,\vec{r}_e)=E\,\psi(\vec{r}_p,\vec{r}_e)

where \psi(\vec{r}_p,\vec{r}_e) is the electron and proton's wave function.
The electrostatic potential energy, V(r), for a central potential is given by the following formula, where r is the radius vector separating the two charges:

V(r)=\frac{q_1 q_2}{|\vec{r}|}

As is common in quantum mechanics, you use the CGS (centimeter-gram-second) system of units, where the Coulomb force constant 1/(4\pi\epsilon_0) is equal to 1.

So the potential due to the electron and proton charges in the hydrogen atom is

V(r)=-\frac{e^2}{|\vec{r}|}

Note that \vec{r}=\vec{r}_e-\vec{r}_p, so the preceding equation becomes

V(r)=-\frac{e^2}{|\vec{r}_e-\vec{r}_p|}

which gives you this Schrödinger equation:

\left[-\frac{\hbar^2}{2m_p}\nabla_p^2-\frac{\hbar^2}{2m_e}\nabla_e^2-\frac{e^2}{|\vec{r}_e-\vec{r}_p|}\right]\psi(\vec{r}_p,\vec{r}_e)=E\,\psi(\vec{r}_p,\vec{r}_e)
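As a quick numerical sanity check on this equation (my own sketch, not part of the original text): separating off the center-of-mass motion reduces the two-body problem to a single body with the reduced mass m = m_e m_p/(m_e + m_p), whose ground-state energy -m e^4/(2\hbar^2) in CGS units should land near the familiar -13.6 eV.

# CGS constants, quoted to a few significant digits (assumed values)
hbar = 1.0546e-27    # erg s
m_e  = 9.1094e-28    # g
m_p  = 1.6726e-24    # g
e    = 4.8032e-10    # statcoulomb
erg_per_eV = 1.6022e-12

m  = m_e * m_p / (m_e + m_p)    # reduced mass of the electron-proton pair
E1 = -m * e**4 / (2 * hbar**2)  # ground-state (n = 1) energy in erg
print(E1 / erg_per_eV)          # ~ -13.6 eV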
|
3d71f09e90766e59 | Density functional theory
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. In many cases the results of DFT calculations for solid-state systems agree quite satisfactorily with experimental data. Computational costs are relatively low when compared to traditional methods, such as Hartree–Fock theory and its descendants based on the complex many-electron wavefunction.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe intermolecular interactions, especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some other strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors.[1] Its incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms)[2] or where dispersion competes significantly with other effects (e.g. in biomolecules).[3] The development of new DFT methods designed to overcome this problem, by alterations to the functional and inclusion of additional terms to account for both core and valence electrons [4] or by the inclusion of additive terms,[5][6][7][8] is a current research topic.
Overview of method
Note: Recently, another foundation for constructing DFT without the Hohenberg–Kohn theorems has been gaining popularity: a Legendre transformation from the external potential to the electron density. See, e.g., Density Functional Theory – an introduction, Rev. Mod. Phys. 78, 865–951 (2006), and references therein. The book 'The Fundamentals of Density Functional Theory' by H. Eschrig contains detailed mathematical discussions of DFT; there is a difficulty for the N-particle system with infinite volume, but there are no mathematical problems in a finite periodic system (torus).
Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential V in which the electrons are moving. A stationary electronic state is then described by a wavefunction \Psi(\vec r_1,\dots,\vec r_N) satisfying the many-electron time-independent Schrödinger equation
\hat H \Psi = \left[{\hat T}+{\hat V}+{\hat U}\right]\Psi = \left[\sum_i^N \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_i^N V(\vec r_i) + \sum_{i<j}^N U(\vec r_i, \vec r_j)\right] \Psi = E \Psi
where, for the \ N -electron system, \hat H is the Hamiltonian, \ E is the total energy, \hat T is the kinetic energy, \hat V is the potential energy from the external field due to positively charged nuclei, and \hat U is the electron-electron interaction energy. The operators \hat T and \hat U are called universal operators as they are the same for any \ N -electron system, while \hat V is system dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term \hat U .
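A quick way to see why one wants a way around the full wavefunction (an illustration, not from the original text): storing \Psi on even a crude grid of 10 points per coordinate takes 10^{3N} complex amplitudes, which becomes hopeless long before N reaches chemically interesting values.

# Memory needed for a many-electron wavefunction sampled on a crude grid
# of 10 points per coordinate: 10**(3N) complex numbers at 16 bytes each.
for N in (1, 2, 5, 10):
    amplitudes = 10 ** (3 * N)
    print(f"N = {N:2d}: {amplitudes:.1e} amplitudes, {amplitudes * 16 / 1e9:.1e} GB")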
Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically map the many-body problem, with \hat U , onto a single-body problem without \hat U . In DFT the key variable is the particle density n(\vec r), which for a normalized \,\!\Psi is given by
n(\vec r) = N \int{\rm d}^3r_2 \cdots \int{\rm d}^3r_N \Psi^*(\vec r,\vec r_2,\dots,\vec r_N) \Psi(\vec r,\vec r_2,\dots,\vec r_N).
This relation can be reversed, i.e., for a given ground-state density n_0(\vec r) it is possible, in principle, to calculate the corresponding ground-state wavefunction \Psi_0(\vec r_1,\dots,\vec r_N). In other words, \,\!\Psi is a unique functional of \,\!n_0,[9]
\,\!\Psi_0 = \Psi[n_0]
and consequently the ground-state expectation value of an observable \,\hat O is also a functional of \,\!n_0
O[n_0] = \left\langle \Psi[n_0] \left| \hat O \right| \Psi[n_0] \right\rangle.
In particular, the ground-state energy is a functional of \,\!n_0
E_0 = E[n_0] = \left\langle \Psi[n_0] \left| \hat T + \hat V + \hat U \right| \Psi[n_0] \right\rangle
where the contribution of the external potential \left\langle \Psi[n_0] \left|\hat V \right| \Psi[n_0] \right\rangle can be written explicitly in terms of the ground-state density \,\!n_0
V[n_0] = \int V(\vec r) n_0(\vec r){\rm d}^3r.
More generally, the contribution of the external potential \left\langle \Psi \left|\hat V \right| \Psi \right\rangle can be written explicitly in terms of the density \,\!n,
V[n] = \int V(\vec r) n(\vec r){\rm d}^3r.
The functionals \,\!T[n] and \,\!U[n] are called universal functionals, while \,\!V[n] is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified \hat V, one then has to minimize the functional
E[n] = T[n]+ U[n] + \int V(\vec r) n(\vec r){\rm d}^3r
with respect to n(\vec r), assuming one has got reliable expressions for \,\!T[n] and \,\!U[n]. A successful minimization of the energy functional will yield the ground-state density \,\!n_0 and thus all other ground-state observables.
The variational problems of minimizing the energy functional \,\!E[n] can be solved by applying the Lagrangian method of undetermined multipliers.[12] First, one considers an energy functional that doesn't explicitly have an electron-electron interaction energy term,
E_s[n] = \left\langle \Psi_s[n] \left| \hat T + \hat V_s \right| \Psi_s[n] \right\rangle
where \hat T denotes the kinetic energy operator and \hat V_s is an external effective potential in which the particles are moving, so that n_s(\vec r)\ \stackrel{\mathrm{def}}{=}\ n(\vec r). Thus, one can solve the so-called Kohn–Sham equations of this auxiliary non-interacting system,

\left[-\frac{\hbar^2}{2m}\nabla^2+V_s(\vec r)\right] \phi_i(\vec r) = \epsilon_i \phi_i(\vec r)
which yields the orbitals \,\!\phi_i that reproduce the density n(\vec r) of the original many-body system
n(\vec r )\ \stackrel{\mathrm{def}}{=}\ n_s(\vec r)= \sum_i^N \left|\phi_i(\vec r)\right|^2.
The effective single-particle potential can be written in more detail as

V_s(\vec r) = V(\vec r) + \int \frac{e^2n_s(\vec r\,')}{|\vec r-\vec r\,'|} {\rm d}^3r' + V_{\rm XC}[n_s(\vec r)]
where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion, while the last term \,\!V_{\rm XC} is called the exchange-correlation potential. Here, \,\!V_{\rm XC} includes all the many-particle interactions. Since the Hartree term and \,\!V_{\rm XC} depend on n(\vec r ), which depends on the \,\!\phi_i, which in turn depend on \,\!V_s, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for n(\vec r), then calculates the corresponding \,\!V_s and solves the Kohn–Sham equations for the \,\!\phi_i. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
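The loop described above is short enough to sketch end to end. The following toy calculation (my own construction, with every numerical choice an illustrative assumption) runs a self-consistent Hartree-only cycle, the exchange-correlation potential being deliberately omitted for brevity, for two electrons in a one-dimensional harmonic well with a softened Coulomb repulsion:

import numpy as np

# Toy 1D self-consistency loop: two electrons in a harmonic well with a
# soft-Coulomb Hartree term; exchange-correlation omitted for brevity.
M, L = 401, 10.0
x = np.linspace(-L, L, M); dx = x[1] - x[0]
v_ext = 0.5 * x**2                                        # external potential

# Kinetic operator -(1/2) d^2/dx^2 by second-order finite differences
T = (np.diag(np.full(M, 1.0 / dx**2))
     - np.diag(np.full(M - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(M - 1, 0.5 / dx**2), -1))
soft = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # softened 1/|x-x'|

n = np.full(M, 2.0 / (2 * L))                             # initial guess density
for it in range(200):
    v_s = v_ext + soft @ n * dx                           # V_s = V_ext + V_Hartree
    eps, phi = np.linalg.eigh(T + np.diag(v_s))           # single-particle problem
    orb = phi[:, 0] / np.sqrt(dx)                         # lowest orbital, unit norm
    n_new = 2.0 * orb**2                                  # n(x) = sum_i |phi_i|^2
    if np.max(np.abs(n_new - n)) < 1e-8:                  # density converged?
        break
    n = 0.7 * n + 0.3 * n_new                             # linear mixing for stability
print(it, eps[0])                                         # iterations used, lowest eigenvalue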
NOTE: The one-to-one correspondence between the electron density and the single-particle potential is not entirely smooth; it contains non-analytic structure, and E_s[n] contains singularities. This may indicate a limit to the hope of representing the exchange-correlation functional in a simple form.
Approximations (exchange-correlation functionals)
In the local-density approximation (LDA), the exchange-correlation energy is a functional of the density alone at each point in space:

E_{\rm XC}^{\rm LDA}[n]=\int\epsilon_{\rm XC}(n)\,n(\vec{r})\,{\rm d}^3r.
The local spin-density approximation (LSDA) is the straightforward generalization to include electron spin:

E_{\rm XC}^{\rm LSDA}[n_\uparrow,n_\downarrow]=\int\epsilon_{\rm XC}(n_\uparrow,n_\downarrow)\,n(\vec{r})\,{\rm d}^3r.
Highly accurate formulae for the exchange-correlation energy density \epsilon_{\rm XC}(n_\uparrow,n_\downarrow) have been constructed from quantum Monte Carlo simulations of jellium.[13]
Generalized gradient approximations[14][15][16] (GGA) are still local but also take into account the gradient of the density at the same coordinate:
E_{XC}^{\rm GGA}[n_\uparrow,n_\downarrow]=\int\epsilon_{XC}(n_\uparrow,n_\downarrow,\vec{\nabla}n_\uparrow,\vec{\nabla}n_\downarrow)\,n(\vec{r})\,{\rm d}^3r
Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
Generalizations to include magnetic fields
Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every h^{3} of volume.[21] For each element of coordinate space volume d^{3}r we can fill out a sphere of momentum space up to the Fermi momentum p_f [22]
\frac{4}{3}\pi p_f^3(\vec{r}).

Equating the number of electrons in coordinate space to that in phase space gives

n(\vec{r}) = \frac{8\pi}{3h^3}p_f^3(\vec{r}).
Solving for p_{f} and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:
t_{TF}[n] = \frac{p_f^2}{2m_e} \propto \frac{(n^{1/3}(\vec{r}))^2}{2m_e} \propto n^{2/3}(\vec{r})
T_{TF}[n]= C_F \int n(\vec{r}) n^{2/3}(\vec{r}) d^3r =C_F\int n^{5/3}(\vec{r}) d^3r\
where C_F=\frac{3h^2}{10m_e}\left(\frac{3}{8\pi}\right)^{2/3}.\
The kinetic energy functional can be improved by adding the Weizsäcker (1935) correction:[23][24]
T_W[n]=\frac{1}{8}\frac{\hbar^2}{m}\int\frac{|\nabla n(\vec{r})|^2}{n(\vec{r})}d^3r.\
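As a concrete illustration of evaluating such an explicit density functional (my own example, in atomic units, where C_F = (3/10)(3\pi^2)^{2/3} \approx 2.871), here is the Thomas–Fermi kinetic energy of the hydrogen 1s density:

import numpy as np

# Thomas-Fermi kinetic energy of the hydrogen 1s density n(r) = exp(-2r)/pi,
# in atomic units; the exact kinetic energy of this state is 0.5 hartree.
C_F = 0.3 * (3 * np.pi**2) ** (2.0 / 3.0)
r = np.linspace(1e-6, 30.0, 200001)
n = np.exp(-2.0 * r) / np.pi
T_TF = C_F * np.trapz(n ** (5.0 / 3.0) * 4 * np.pi * r**2, r)
print(T_TF)   # ~0.29 hartree, so the TF functional underestimates here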
Hohenberg–Kohn theorems
1. If two systems of electrons, one trapped in a potential v_1(\vec r) and the other in v_2(\vec r), have the same ground-state density n(\vec r), then necessarily v_1(\vec r)-v_2(\vec r) = const.
Corollary: the ground state density uniquely determines the potential and thus all properties of the system, including the many-body wave function. In particular, the "HK" functional, defined as F[n]=T[n]+U[n] is a universal functional of the density (not depending explicitly on the external potential).
2. For any positive integer N and potential v(\vec r), a density functional F[n] exists such that E_{(v,N)}[n] = F[n]+\int{v(\vec r)n(\vec r)d^3r} obtains its minimal value at the ground-state density of N electrons in the potential v(\vec r). The minimal value of E_{(v,N)}[n] is then the ground state energy of this system.
Ab initio Pseudo-potentials
A crucial step toward more realistic pseudo-potentials was taken by Topp and Hopfield and, more recently, Cronin, who suggested that the pseudo-potential should be adjusted so that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo wave-functions to coincide with the true valence wave-functions beyond a certain distance r_l. The pseudo wave-functions are also forced to have the same norm as the true valence wave-functions and can be written as

R_{\rm l}^{\rm PP}(r)=R_{\rm nl}^{\rm AE}(r) \quad \text{for } r > r_l,

\int_{0}^{r_l}dr\,|R_{\rm l}^{\rm PP}(r)|^2r^2=\int_{0}^{r_l}dr\,|R_{\rm nl}^{\rm AE}(r)|^2r^2,

where R_l(r) is the radial part of the wave-function with angular momentum l, and PP and AE denote, respectively, the pseudo wave-function and the true (all-electron) wave-function. The index n in the true wave-functions denotes the valence level. The distance r_l beyond which the true and the pseudo wave-functions are equal is also l-dependent.
Software supporting DFT
See also
2. ^ Van Mourik, Tanja; Gdanitz, Robert J. (2002). "A critical note on density functional theory studies on rare-gas dimers". Journal of Chemical Physics 116 (22): 9620–9623. Bibcode:2002JChPh.116.9620V. doi:10.1063/1.1476010.
3. ^ Vondrášek, Jiří; Bendová, Lada; Klusák, Vojtěch; Hobza, Pavel (2005). "Unexpectedly strong energy stabilization inside the hydrophobic core of small protein rubredoxin mediated by aromatic residues: correlated ab initio quantum chemical calculations". Journal of the American Chemical Society 127 (8): 2615–2619. doi:10.1021/ja044607h. PMID 15725017.
4. ^ Grimme, Stefan (2006). "Semiempirical hybrid density functional with perturbative second-order correlation". Journal of Chemical Physics 124 (3): 034108. Bibcode:2006JChPh.124c4108G. doi:10.1063/1.2148954. PMID 16438568.
5. ^ Zimmerli, Urs; Parrinello, Michele; Koumoutsakos, Petros (2004). "Dispersion corrections to density functionals for water aromatic interactions". Journal of Chemical Physics 120 (6): 2693–2699. Bibcode:2004JChPh.120.2693Z. doi:10.1063/1.1637034. PMID 15268413.
6. ^ Grimme, Stefan (2004). "Accurate description of van der Waals complexes by density functional theory including empirical corrections". Journal of Computational Chemistry 25 (12): 1463–1473. doi:10.1002/jcc.20078. PMID 15224390.
7. ^ Von Lilienfeld, O. Anatole; Tavernelli, Ivano; Rothlisberger, Ursula; Sebastiani, Daniel (2004). "Optimization of effective atom centered potentials for London dispersion forces in density functional theory". Physical Review Letters 93 (15): 153004. Bibcode:2004PhRvL..93o3004V. doi:10.1103/PhysRevLett.93.153004. PMID 15524874.
8. ^ Tkatchenko, Alexandre; Scheffler, Matthias (2009). "Accurate Molecular Van Der Waals Interactions from Ground-State Electron Density and Free-Atom Reference Data". Physical Review Letters 102 (7): 073005. Bibcode:2009PhRvL.102g3005T. doi:10.1103/PhysRevLett.102.073005. PMID 19257665.
9. ^ Hohenberg, Pierre; Walter Kohn (1964). "Inhomogeneous electron gas". Physical Review 136 (3B): B864–B871. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
10. ^ Levy, Mel (1979). "Universal variational functionals of electron densities, first-order density matrices, and natural spin-orbitals and solution of the v-representability problem". Proceedings of the National Academy of Sciences (United States National Academy of Sciences) 76 (12): 6062–6065. Bibcode:1979PNAS...76.6062L. doi:10.1073/pnas.76.12.6062.
11. ^ Vignale, G.; Mark Rasolt (1987). "Density-functional theory in strong magnetic fields". Physical Review Letters (American Physical Society) 59 (20): 2360–2363. Bibcode:1987PhRvL..59.2360V. doi:10.1103/PhysRevLett.59.2360. PMID 10035523.
12. ^ Kohn, W.; Sham, L. J. (1965). "Self-consistent equations including exchange and correlation effects". Physical Review 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.
13. ^ John P. Perdew, Adrienn Ruzsinszky, Jianmin Tao, Viktor N. Staroverov, Gustavo Scuseria and Gábor I. Csonka (2005). "Prescriptions for the design and selection of density functional approximations: More constraint satisfaction with fewer fits". Journal of Chemical Physics 123 (6): 062201. Bibcode:2005JChPh.123f2201P. doi:10.1063/1.1904565. PMID 16122287.
14. ^ Perdew, John P; Chevary, J A; Vosko, S H; Jackson, Koblar, A; Pederson, Mark R; Singh, D J; Fiolhais, Carlos (1992). "Atoms, molecules, solids, and surfaces: Applications of the generalized gradient approximation for exchange and correlation". Physical Review B 46 (11): 6671. doi:10.1103/physrevb.46.6671.
15. ^ Becke, Axel D (1988). "Density-functional exchange-energy approximation with correct asymptotic behavior". Physical Review A 38 (6): 3098. doi:10.1103/physreva.38.3098.
16. ^ Langreth, David C; Mehl, M J (1983). "Beyond the local-density approximation in calculations of ground-state electronic properties". Physical Review B 28 (4): 1809. doi:10.1103/physrevb.28.1809.
17. ^ Grayce, Christopher; Robert Harris (1994). "Magnetic-field density-functional theory". Physical Review A 50 (4): 3089–3095. Bibcode:1994PhRvA..50.3089G. doi:10.1103/PhysRevA.50.3089. PMID 9911249.
18. ^ Viraht, Xiao-Yin (2012). "Hohenberg-Kohn theorem including electron spin". Physical Review A 86. Bibcode:1994PhRvA.86.042502. doi:10.1103/physreva.86.042502.
19. ^ Segall, M.D.; Lindan, P.J (2002). "First-principles simulation: ideas, illustrations and the CASTEP code". Journal of Physics: Condensed Matter 14 (11): 2717. Bibcode:2002JPCM...14.2717S. doi:10.1088/0953-8984/14/11/301.
21. ^ (Parr & Yang 1989, p. 47)
23. ^ Weizsäcker, C. F. v. (1935). "Zur Theorie der Kernmassen". Zeitschrift für Physik 96 (7–8): 431–58. Bibcode:1935ZPhy...96..431W. doi:10.1007/BF01337700.
24. ^ (Parr & Yang 1989, p. 127)
Key papers
External links |
424f8cc6c8dfae6d |
QED Lagrangian lead to self-interaction?
1. Jul 16, 2008 #1
Hi, I'm not very good at QFT, but starting from Dirac's QED Lagrangian (look for example at: wikipedia/Quantum_electrodynamics )
\mathcal{L}=\bar{\psi}\left(i\hbar c\,\gamma^\mu D_\mu - mc^2\right)\psi-\frac{1}{4\mu_0}F_{\mu\nu}F^{\mu\nu}
From here we derive the Dirac equation and Maxwell's equations. Now, omitting all derivation steps, everything can be summarized in the non-relativistic limit as:
\left(-\frac{\hbar^2}{2m}\nabla^2+\frac{q^2}{4\pi\epsilon}\int\frac{\mid\psi(\vec{r}^{\,\prime})\mid^2}{\mid \vec{r}-\vec{r}^{\,\prime} \mid}d^3\vec{r}^{\,\prime}\right)\psi=E\psi
This is in fact what the QED Lagrangian results in (ignoring the very small contribution from the magnetic vector potential A for simplicity), but the effective Schrödinger equation looks more like a Kohn–Sham equation for a single particle. But is this correct?
The Coulomb integral suggests that the electron is interacting with itself! (compare Coulomb blockade) I thought that the correct limit has to be the Schrödinger equation, but with some funny coupling term (though not as strong as Coulomb self-interaction). Is it the field operators that are needed in some way to make this term vanish when it is not "appropriate"?
2. jcsd
3. Jul 17, 2008 #2
I hoped somebody else would answer this question:-) I am afraid I have neither a clear answer nor time to sort this out, but I suspect this question is considered in Barut's works on self-field electrodynamics and in nightlight's posts in this forum.
4. Jul 18, 2008 #3
YES!!! you guys are correct! That is the correct form of the equation. Indeed this is Barut's SFQED.
Now you just have to put in the self-interaction from the vector potential, and you can do all of QED with a first-quantized formalism!
Last edited: Jul 18, 2008
5. Jul 18, 2008 #4
Also, you'll notice that on this view, the physically correct Schroedinger equation for a charged particle always has those self-interaction terms, and therefore the physically correct Schroedinger (or Dirac) equation is not actually a linear equation, but a nonlinear integro-differential equation. This has profound implications for the interpretations of QM. Most notably, the Everett MWI interpretation is not consistent with a Schroedinger equation like this with self-interaction terms.
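A tiny numerical illustration of that nonlinearity (my own sketch, not from the thread; the soft-core kernel is an assumption used to tame the 1D Coulomb singularity): because the self-term is quadratic in |\psi|, the potential generated by a superposition is not the sum of the potentials generated by its parts.

import numpy as np

# The self-interaction potential V[psi](x) ~ Int |psi(x')|^2 / |x-x'| dx'
# is quadratic in psi, so the effective wave equation is nonlinear:
# V[psi1 + psi2] != V[psi1] + V[psi2] whenever the packets overlap.
x = np.linspace(-20.0, 20.0, 2001); dx = x[1] - x[0]
kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # soft-core kernel

def self_potential(psi):
    return kernel @ np.abs(psi)**2 * dx

psi1 = np.exp(-(x - 1.0)**2)
psi2 = np.exp(-(x + 1.0)**2)
gap = self_potential(psi1 + psi2) - self_potential(psi1) - self_potential(psi2)
print(np.max(np.abs(gap)))   # clearly nonzero, so superposition fails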
6. Jul 18, 2008 #5
If you mean the standard second-quantized QED in its entirety, I suspect this statement requires a few caveats. First, Barut introduced the Feynman propagator for the electromagnetic field (which corresponds to a complex Lagrangian) - that was an additional assumption. Second, to implement Pauli's exclusion principle in his theory, Barut had to modify the Lagrangian. Furthermore, Barut did not claim his theory exactly reproduced all the results of the standard QED, rather he hoped experiments would favor his theory if and where it differed from QED.
7. Jul 18, 2008 #6
I have always suspected that this is the case, but have never seen the equations interpreted that way in a mainstream context.
If such a result was more widely accepted, it could have profound implications for quantum computing. Mark Oskin, from the University of Washington CS department, explains the time evolution of the quantum state in terms of a unitary operator:
ψ' = Uψ
And then he comments:
"The fact that U cannot depend ψ and only on t1 and t2 is a subtle and disappointing fact. We will see later that if U could depend on ψ then quantum computers could easily solve NP complete problems!"
8. Jul 18, 2008 #7
I mean you can calculate all radiative effects that are calculate with 2nd quantized QED, but with Barut's 1st quantized formalism.
The Feynman propagator he uses, yes, was an additional assumption, but can't really be called "2nd quantization", and is not necessarily problematic.
The exclusion principle part I'm not sure about.
Even though Barut hoped his theory could be differentiated from standard QED, that was only a hope and he had no real physics-based reason to believe this would be the case (in fact there is reason to think the theories are empirically equivalent). And to the extent that SFQED was applied to all radiative processes in QED, it made the same predictions. That's why I said you could do all of QED with his theory.
9. Jul 18, 2008 #8
That would be interesting if this was an implication for quantum computing, but I don't know enough about this.
10. Jul 18, 2008 #9
I found the link I was looking for to the paper containing the statement:
Quantum Computing Lecture Notes
I don't think that the quantum computing people will take notice until someone comes up with a method that could in theory exploit the nonlinear self-interaction term to create a general logic gate that transforms qubits with regard for their current state.
11. Jul 18, 2008 #10
Sounds like it's worth looking into from a self-field approach.
12. Jul 18, 2008 #11
Thanks. I have to check this guy Barut I think.
I solved the self-consistent problem today for a free electron in vacuum. You then get a bound state, and at the same time the electric field that the wave function produces also contains that energy, so energy is conserved. The electron is then a blob on its own and not a wave. But isn't this unphysical?
13. Jul 18, 2008 #12
What do you mean it is a blob on its own? The wavefunction solution to the nonlinear S.E. is still a Fourier expansion of waves.
By the way, Barut et al. have treated the case of the nonrelativisic free particle of their theory:
Quantum electrodynamics based on self-fields, without second quantization: A nonrelativistic calculation of g-2
A. O. Barut, Jonathan P. Dowling, J. F. van Huele
14. Jul 18, 2008 #13
"not necessarily problematic" - maybe, but some knowledgeable people (such as Bialynicki-Birula) contend the use of the Feynman propagator is equivalent to 2nd quantization of the electromagnetic field in the absence of external photon lines.
I'll try to find a reference if and when I have time. Same for the reference to Bialynicki-Birula.
Actually, it was the word "just" in your phrase "Now you just have to put in the self-interaction from the vector potential, and you can do all of QED with a first-quantized formalism!" that triggered my previous post.
Maybe I missed something, but I'm not sure he "hoped his theory could be differentiated from standard QED", my impression was he would have preferred if the results had been the same. Maybe I'm wrong though.
15. Jul 18, 2008 #14
I have already read the Birula paper. I meant two things: that the matter field in Barut is not 2nd quantized (that's obvious), and that although Feynman and Birula say the use of the Feynman propagator is equivalent to 2nd quantization of the EM field in the absence of external photon lines, this is not the same as saying that the self-field is 2nd quantized, in my opinion. Maybe it's just semantics, but it's not so clear to me what "2nd quantized" means with respect to the self-field. If they just mean use of the complex-valued Feynman propagator instead of the real-valued classical Green's propagator, well, OK, but then both parts of the term "2nd quantized" seem to me a misnomer. I mean, the self-field is not an operator-valued field, nor is it decomposable into quantized harmonic oscillators. One could certainly however say that it is a 1st quantized self-field because after all the electron charge is coupled to the 1st quantized matter current density, but that's about it as far as I can see.
OK, for the Pauli exclusion principle, thanks.
I have asked Dowling about this and read some papers where they pretty much say they did hope it would be a different theory than standard perturbative QED. The only reasons to think this is that their method of solution is different, being a nonperturbative iteration procedure with Mellin-Barnes transforms, as opposed to asymptotic expansions with renormalization. But all the QED phenomena they did treat in their theory gave the same results to lowest orders in Z*alpha. Moreover, the equivalence of eliminating the 2nd quantized free field with the self-field as Feynman and Birula mention would also suggest to me an empirical equivalence for QED phenomena, even if the methods of solution are different.
There is however one place where a difference of predictions does seem to exist between the two theories (and I think it suggests that perhaps perturbative QED is after all an approximation to the Barut theory), namely, the old cosmological constant problem. Perturbative QED predicts an infinite vacuum energy density (even in the absence of matter) whose absolute value induces infinite spacetime curvature according to the Einstein field equation. But the Barut theory does not predict any such infinite vacuum energy density, with or without the presence of matter. So it easily solves the old cosmological constant problem. That to me seems like a significant difference, but one based on an intertheoretic consideration. Actually, this also ties into the fact that SFQED gives finite answers whereas perturbative QED gives infinite bare values. So maybe you could indeed have good reason to say that perturbative QED is an approximation to SFQED.
Last edited: Jul 18, 2008
16. Jul 18, 2008 #15
I should also add that Barut and Dowling did in fact extend their approach to 2nd quantized matter fields and got the same answers.
17. Jul 18, 2008 #16
Science Advisor
The idea of self energy makes great sense, and is, in fact, forced on us by the basic structure of the QED interaction, for example. It is evident in Poynting's theorem, and in the old adiabatic assembling of a charge. That self energy shows up is no surprise, so the issue is what do you do with it? And the jury is still out.
Fortunately, our inability to deal with this concept has not precluded great advances in QED, the Standard Model and on.... What we've learned from QED is that the corrections due to self energy and polarization of the vacuum and charge screening (corrections due to vertex diagrams) are very small, and have virtually no effect on physics at an atomic or molecular or nuclear scale. Non-corrected theory works just fine in those regions of physics. So, typically we throw out the self energy terms, with a nod to empirical justification. In the relativistic case, we throw away the infinities that plague us, but in a way that is astonishingly accurate.
It ain't pretty, but it's the best we have. Great opportunity indeed.
Reilly Atkinson
18. Jul 18, 2008 #17
I strongly disagree with this characterization of corrections due to self energy as having "virtually no effect on physics at an atomic or molecular or nuclear scale". The Lamb shift, spontaneous emission, corrections to g-2, and cavity QED effects, are all examples of highly nontrivial physical phenomena in various parts of AMO and nuclear physics. Moreover, Barut's self-field approach is the most explicit example of how self-energy is indispensable to said QED phenomena.
19. Jul 19, 2008 #18
So here's the reference.
A.O. Barut, "Foundations of Self-Field Quantumelectrodynamics", in: "New Frontiers in Quantum Electrodynamics and Quantum Optics", Ed. by A.O. Barut, NATO ASI Series V.232, 1991, p. 358:
"For two identical particles we use the postulate of the first quantized quantum theory that the field is symmetric or antisymmetric under the interchange of all dynamical variables of identical particles. In our formulation we go back to the original action principle and assume that the current $j_\mu$ is antisymmetric in the two fields
$e_1=e_2=e$ (52)
This implies in the interaction action
$W_{\textrm{int}}=\frac{1}{4}e^2\left[\int dx\, dy\, \bar{\psi}_1(x)\gamma_\mu\psi_2(x)\,D(x-y)\,\bar{\psi}_1(y)\gamma_\mu\psi_2(y)-\int dx\, dy\, \bar{\psi}_1\gamma_\mu\psi_2\,D(x-y)\,\bar{\psi}_2\gamma_\mu\psi_1+(1\leftrightarrow2)\right]$ (53)"
20. Jul 19, 2008 #19
Thanks for this reference! I have seen this paper before, and only vaguely recall this part. I don't really consider this an "additional assumption". It is simply a consequence of staying in the first quantized matter formalism. In any case, thanks.
21. Jul 19, 2008 #20
In my book, if one changes the Lagrangian, he does introduce an additional assumption. So I guess we disagree on this point.
Another thing. You see, to "stay in the first quantized matter formalism", Barut "smuggles" in second quantization for the electromagnetic field by using the Feynman propagator, and "smuggles" in something similar for the electron field by changing the Lagrangian. I am not trying to criticize Barut, I am just trying to say that his self-field electrodynamics is unfinished business. It is not easy to determine its exact status. It took me quite some time to sort it out, and I am not sure I have a clear picture even now. So I try to tread carefully. It is too easy to say something that is not quite accurate. Sometimes I had to admit in this forum that I had made a mistake. Things are just too often not quite what they seem.
Saturday, July 28, 2007
I am guilty of frequently using physics speech in daily life, an annoying habit I also noticed among many of my colleagues [1]. You'll find me stating "My brain feels very Boltzmannian today", or "The customer density in this store is too high for my metastable mental balance". I have a friend who calls Chinese take out "the canonical choice" and another friend who, when asked whether he had made a decision, famously explained "I don't yet want my wave-function to collapse". My ex-boyfriend once called it "the physicist's Tourette-syndrome" [2].
One of my favourite physics-speech words is self-consistent. Self-consistency is tightly related to nothing. You know, that "nothing" that causes your wife to conclude her whole life is a disaster, we're all going to die in a nuclear accident, her glasses vanished (again!), and btw that's all your fault (obviously). But if you ask her what's the matter? Well, nothing.
"There's nothing I hate more than nothing
Nothing keeps me up at night
I toss and turn over nothing
Nothing could cause a great big fight
Hey -- what's the matter?
Don't tell me nothing."
~Edie Brickell, Nothing
1. Self-consistent
Science is our attempt to understand the world we live in. We observe and try to find reliable rules upon which to build our expectations. We search for explanations that are useful to make predictions, a framework to understand our environment and shape our future according to our needs. If our observations disagree with our rules, or observations seemingly disagree with each other (I swear I left my glasses in the kitchen), we are irritated and try to find a mistake. Something being in contradiction with itself [3] is what I mean by not self-consistent (What's the matter? - Nothing!).
On a mathematical basis this is very straightforward. E.g. if you assume my mood is given by a real-valued continuous function f on the compact interval [now, then] with f(now)f(then) smaller than 0, this isn't self-consistent with the expectation that f has no zero [4]. For more details on my mood, see sidebar.
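For reference, the theorem invoked in footnote [4], stated in symbols (a standard statement, added here for completeness):

[itex]f \in C[a,b],\quad f(a)\,f(b) < 0 \quad\Longrightarrow\quad \exists\, t \in (a,b):\ f(t) = 0.[/itex]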
Self-consistency is a very powerful concept in theoretical physics: if one talks about a probability, that probability had better not be larger than one. If one starts with the axioms of quantum mechanics, it's not self-consistent to talk about a particle's definite position and momentum. The speed of light being observer-independent is not compatible with Galilean invariance and the standard addition law for velocities. Instead, self-consistency requires the addition law to be modified. This led Einstein to develop Special Relativity.
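To make the last example concrete (the standard relativistic formula, added for illustration): requiring self-consistency with an observer-independent speed of light replaces the Galilean law [itex]w = u + v[/itex] by

[itex]w = \frac{u+v}{1+uv/c^2},[/itex]

which reduces to the Galilean law for [itex]u,v \ll c[/itex] and never yields a speed above [itex]c[/itex].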
A particularly nice example comes from multi-particle quantum mechanics, where an iterative approach can be used to find a 'self-consistent' solution for the electron distribution, e.g. in a crystal or for an atom with many electrons (see self-consistent field method or Hartree-Fock method). A state of several charged particles will not be just a tensor product of the single-particle states, since the particles interact and influence each other. One starts with the tensor product as a 'guess' and applies the 'rules' of the theory. That is, by solving the Schrödinger equation with the mean-field potential which effectively describes the interaction, a new set of single-particle wave functions can be computed. This result will however in general not agree with the initial guess: it is not self-consistent. In this case, one repeats the procedure using the result as an improved guess. Given that the differential equations behave nicely, this iterative procedure leads one to a fixed point with the property that the initial distribution agrees with the resulting one: it is self-consistent.
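The iteration loop is simple enough to sketch in a few lines of code. The following is a toy illustration only, not an actual Hartree-Fock implementation: a particle on a grid in a harmonic trap with a made-up density-dependent (Hartree-like) contact potential; all names and parameters here are invented for the example.

import numpy as np

def scf_iteration(T, v_mf, n_occ, n_guess, tol=1e-8, max_iter=500):
    # iterate density -> Hamiltonian -> density until the guess reproduces itself
    n = n_guess
    for step in range(max_iter):
        H = T + np.diag(v_mf(n))                          # Hamiltonian built from current density
        _, psi = np.linalg.eigh(H)                        # solve the discretized Schroedinger equation
        n_new = (np.abs(psi[:, :n_occ])**2).sum(axis=1)   # density of the n_occ occupied states
        if np.linalg.norm(n_new - n) < tol:               # result agrees with guess: self-consistent
            return n_new, step
        n = 0.5 * n + 0.5 * n_new                         # damped mixing to help convergence
    raise RuntimeError("no self-consistent solution found")

N = 50
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]
T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2.0 * dx**2)  # kinetic term, hbar = m = 1
v_mf = lambda n: 0.5 * x**2 + n            # harmonic trap plus an invented contact mean field
density, steps = scf_iteration(T, v_mf, n_occ=3, n_guess=np.full(N, 1.0 / N))
print("self-consistent after", steps, "iterations")

The damped update inside the loop is the standard trick to keep such fixed-point iterations from oscillating; whether the iteration converges at all depends, as said above, on the equations behaving nicely.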
A similar requirement holds for quantum corrections. A theory that is subject to quantum corrections but whose initial formulation does not take into account the existence of such extra terms is strictly speaking not self-consistent (see also the interesting discussion to our recent post on Phenomenological Quantum Gravity).
There are some subtleties one needs to consider, most importantly that our knowledge is limited in various regards. Self-consistency might only hold under certain assumptions or in certain limiting regimes, like small velocities (relative to the speed of light), large distances (relative to the Planck length) or at energies below a certain threshold. Likewise, not being self-consistent might be the result of having applied a theory outside these limits (typically, using an expansion outside a radius of convergence). In some cases (gravitational backreaction), violations of self-consistency can be negligible.
However, one might argue that if it is possible at all to arrive at such a disagreement, then at least one of the assumptions was unnecessary to begin with, and could have been replaced by requiring self-consistency. Unfortunately, this is often more easily said than done -- physics is not mathematics. We rarely start by writing down a set of axioms which one could check for self-consistency. Instead, in many cases one starts with little more than a patchwork of hints, and an idea how to connect them. Self-consistency in this case is somewhat more subtle to check. My friends and I often kill each other's ideas by working out nonsensical consequences. Here, at least as important as self-consistency is that a theory in physics also has to be consistent with observation.
2. Consistent with Observation
The classical Maxwell-Lorentz theory is self-consistent. However, it is in disagreement with the stability of the atom. According to the classical theory, an electron circling around the nucleus should radiate off energy. The solution to this problem was the development of quantum mechanics. The inconsistency in this case was one with observation. Without quantizing the orbits of the electron, atoms would not be stable, and we would not exist.
This requirement is specific to sciences that describe the real world out there. Such a theory can be 'wrong' (not consistent with observation) even though it is mathematically sound. Sometimes however, these two issues get confused. E.g. in a recent Discover issue, Seth Lloyd wrote:
"The vast majority of scientific ideas are (a) wrong and (b) useless. The briefest acquaintance with the real world shows that there are some forms of knowledge that will never be made scientific [...] I would bet that 99.8 percent of ideas put forth by scientists are wrong and will never be included in the body of scientific fact. Over the years, I have refereed many papers claiming to invalidate the laws of quantum mechanics. I’ve even written one or two of them myself. All of these papers are wrong. That is actually how it should be: What makes scientific ideas scientific is not that they are right but that they are capable of being proved wrong."
~Seth Lloyd, You know too much
The current issue now had a letter in reply to this article:
"I was taken aback by Seth Lloyd's assertion that "99.8 percent of ideas put forth by scientists are [probably] wrong" and even more so by his statement that "of the 0.2 percent of ideas that turn out to be correct ... [t]he great majority of them are relatively useless." His thesis omits a basic trait of what we call science -- that it is a continuous fabric, weaving all provable knowledge together [...] we do science for a science sake, because a fundamental principle of science is that we never know when a discovery will be useful"
~Eric Fisher, Springfield, IL.
Well, the majority of my scientific ideas are definitely (a) wrong and (b) useless, but these usually don't end up in a peer review process. However, the reply letter apparently interpreted the word 'correct' as 'provable knowledge', and science as the 'weave' of all that knowledge. It might indeed be that the mathematical framework of a theory that is not consistent with observation turns out to be useful later, but that doesn't change the fact that this idea is 'wrong' in the sense that it does not describe nature. Peer review today seems to be mostly concerned with checking self-consistency, whereas being inconsistent with observation is ironically increasingly tolerated as a 'known problem'. Like, the CC being 120 orders of magnitude too large is a known problem. Oohm, actually the result is just infinity. But, hey, you've turned your integration contour the wrong way, the result is not infinity, but infinity + 2 Pi.
The requirement of consistency with observation was for me the main reason to choose theoretical physics over maths. The world of mathematics, so I found, is too large for me and I got lost following runaway thoughts, or generalizing concepts just because it was possible. It is the connection to the real world, provided by our observations, that can guide physicists through these possibilities and lead the way. (And, speaking of observations and getting lost, I'd really like to know where my glasses are.)
3. Self-contained
Unlike maths, theoretical physics aims to describe the real world out there. This advantageous guiding principle can also be a weakness when it comes to the quantities we deal with. Mathematics deals with well-defined quantities whose properties are examined. In physics one wants to describe nature, and the exact definitions of the quantities are in many cases subject of discussion as well. Consider how our understanding of space and time has changed over the last centuries!
In physics it has often happened that the concepts of a theory's constituents only developed with the theory itself (e.g. the notion of a tensor or the Fock space). So it can happen in physics that one deals with quantities even though the framework does not itself define them. One might say in such a case the theory is incomplete, or not self-contained.
Due to this complication, I've known more than one mathematician who frowned upon approaches in theoretical physics as too vague, whereas physicists often find mathematical rigour too constraining, and instead prefer to rely on their intuition. Joe Polchinski expressed this as follows:
"[A] chain of reasoning is only as strong as its weakest step. Rigor generally makes the strongest steps stronger still - to prove something it is necessary to understand the physics very well first - and so it is often not the critical point where the most effort should be applied. [A]nother problem with rigor [is]: it is hard to get it right. If one makes one error the whole thing breaks, whereas a good physical argument is more robust."
~Joe Polchinski, Guest Post at CV
When it comes to formulating an idea, physicists often set different priorities than mathematicians. In some cases it might just not be necessary to define a quantity because one can sit down and measure it (e.g. the PDFs). Or, one can just leave a question open (will be studied in a forthcoming publication) and get a useful theory nevertheless. All of our present theories leave questions open. Despite this being possible, it is unsatisfactory, and the attempt to make a theory self-contained has led to many insights throughout the history of science.
Newton's dynamics deals with forces, yet there is nothing in this framework that explains the origin of a force. It contains masses, yet does not explain the origin of masses. Maxwell's theory provides an origin of a force (electromagnetic). It has a source term (J), yet it does not explain the dynamics of the source term. This system has to be closed, e.g. with minimal coupling to another field whose dynamics is known. The classical Maxwell-Lorentz theory does this; it is self-contained and self-consistent. However, as mentioned above, this theory is not consistent with observation. Today we know the sources for the electromagnetic field are fermions; they obey the Dirac equation and Fermi statistics. However, if you look at an atom closely enough you'll notice that quantum electrodynamics alone also isn't able to describe it satisfactorily...
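As a concrete illustration of 'closing' such a system (standard textbook equations, added here for concreteness): Maxwell's equations [itex]\partial_\mu F^{\mu\nu} = J^\nu[/itex] leave the source [itex]J^\nu[/itex] unspecified; minimal coupling to a Dirac field supplies both the current and its dynamics,

[itex]J^\nu = e\,\bar{\psi}\gamma^\nu\psi, \qquad \left(i\gamma^\mu(\partial_\mu + ieA_\mu) - m\right)\psi = 0,[/itex]

and the coupled Maxwell-Dirac system is then self-contained.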
Besides the existence of space and time per se, the number of space-time dimensions is one of these open questions that I find very interesting. It has most often been an additional assumption. An exception is string theory where self-consistency requires space-time to have a certain number of dimensions. However - if it also contains an explanation why we observe only three of them, nobody has yet found it. So again, we are left with open questions.
4. Simple and Natural [5]
The last guiding principle that I want to mention is simplicity, or the question whether one can reduce a messy system of axioms and principles to something more simple. Is there a way to derive the parameters of the standard model from a single unified approach? Is there a way to derive the axioms of quantization? Is there a way to derive that our spacetime has dimension three, or Lorentzian signature?
In my opinion, simplicity is often overrated compared to the first three points I listed. We tend to perceive simplicity as elegance or beauty, concepts we strive to achieve, but these guidelines can turn out to be false friends. If you can find your glasses, look around and you'll notice that the world has many facets that are neither elegant nor simple (like my husband impatiently waiting for me to finish). Even if you'd expect the underlying laws of nature to be simple, you'll still have to make the case that a certain observable reflects the elementary theory rather than being a potentially very involved consequence of a complex dynamical system, or an emergent feature. A typical example is the average distances of planets from the sun, a Sacred Mystery of the Cosmos that today nobody would try to derive from a theory of first principles (restrictions apply).
Also, we tend to find things simpler the more familiar we are with them, up to the level of completely forgetting about them (did you say something?). E.g. we are so used to starting with a Lagrangian that we tend to forget that its usefulness rests on the validity of the action principle. It is also quite interesting to note that researchers who are familiar with a field often find it 'simple' and 'natural'... I therefore support Tommaso's suggestion to renormalize simplicity to the generalized grandmother.
In this regard I also want to highlight the argument that one can allegedly derive all the parameters in the standard model 'simply' from today's existence of intelligent life. Notwithstanding the additional complication of 'intelligent', could somebody please simply explain 'existence' and 'life'?
Much like classical electrodynamics, Einstein's field equations too have a source term whose dynamics one needs to know. The system can be closed with an equation of state for each component. This theory is self-consistent [6], and it is consistent with all available observations. It reaches its limits if one asks for the microscopic description of the constituents. The transition from the macro- to the microscopic regime can be made for the sources of the gravitational field, but not also for the coupled gravitational field (oh, and then there's the CC, but this is a known problem).
Two theories that yield the same predictions for all observables I'd call equivalent (if you don't like that, accept it as my definition of equivalence.) But our observations are limited, and unlike the case of classical electrodynamics not being consistent with the stability of the atom, there is presently no observational evidence in disagreement with classical gravity.
For me this then raises the question:
Is there more than one theory that is self-consistent, self-contained and consistent with all present observations?
In a recent comment, Moshe remarked: "To paraphrase Ted Jacobson, you don't quantize the metric for the same reason you don't go about quantizing ocean waves." That certainly sounds reasonable, but if I look at water closely enough I will find the spectral lines of the hydrogen atom and evidence for its constituents. And their quantization. To me, this just doesn't satisfactorily answer the question of what the microscopic structure of the 'medium', here space-time, is.
And what have we learned from all this...?
Let me go back to the start: If you ask a question and the answer is 'Nothing', you most likely asked the wrong question, or misunderstood the answer.
Ah... Stefan found my glasses (don't ask).
See also: Self-Consistency at The Reference Frame
[1] This habit is especially dominant -- and not entirely voluntary -- among non-native English speakers, whose vocabulary is naturally most developed in the job-related area.
[2] Unintentional cursing and uttering of obscenities, called coprolalia, is actually only a specific feature of the Tourette syndrome.
[3] However, some years ago I was taught the word 'self-consistency' in psychology has a different meaning, it refers to a person accumulating knowledge from his/her own behaviour. A person whose thoughts and actions are in agreement and not in contradiction is called 'clear'. (At least in German. I couldn't find any reference to this online, and I'm not a psychologist, so better don't trust me on that.).
[4] See: Bolzano's theorem.
[5] "Woman on Window", by F.L. Campello.
For more, see here.
[6] Note that this theory is self-consistent at arbitrary scales as long as you don't ask for the microscopic origin of the sources.
The most spherical object ever made... used for the gyroscopes in NASA's Gravity Probe B. Launched in April 2004, Gravity Probe B tests two effects predicted by Einstein's theory: the geodetic effect and the frame-dragging (see here for a brief intro).
In order for Gravity Probe B to measure these tiny effects, it must use a gyroscope that is nearly perfect—one that will not wobble or drift more than 10^-12 degrees per hour while it is spinning.
"A nearly-perfect gyroscope must be nearly perfect in two ways: sphericity and homogeneity. Every point on its surface must be exactly the same distance from the center (a perfect sphere), and its structure must be identical from one side to the other [...]
After years of research and development, Gravity Probe B produced just such a gyroscope. It is a 1.5-inch sphere of fused quartz, polished and “lapped” to within a few atomic layers of perfect sphericity. A scan of its surface shows that only .01 microns separate the highest point from the lowest point. Transform the gyroscope into the size of the Earth and its highest mountains and deepest ocean trenches would be a mere eight feet from sea level!"
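One can quickly check the quoted scaling (the numbers are from the quote; the conversion itself is mine and only order-of-magnitude):

r_gyro  = 0.75 * 0.0254   # gyroscope radius in metres (1.5-inch diameter sphere)
bump    = 0.01e-6         # highest-to-lowest point separation in metres (0.01 micron)
r_earth = 6.371e6         # Earth radius in metres
print(bump / r_gyro * r_earth)   # ~ 3.3 m, the same ballpark as the quoted "eight feet"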
Thursday, July 26, 2007
FIAS, the Frankfurt Institute for Advanced Studies
This week, I was again at the new campus of my old university. The science departments of the Johann Wolfgang Goethe University are all moving out of downtown Frankfurt into the fields of Niederursel, where new buildings keep springing up at an extraordinary rate. One of these new buildings is especially eye-catching with its bright-red finish.
This is the new building of FIAS, the Frankfurt Institute for Advanced Studies, and it's interesting not only because of its colour - it's one of the first public research institutes in Germany financed to a large extent by the money of private sponsors.
Universities in Germany have traditionally been financed by public money of the state and federal governments, and they usually don't have large funds of their own. Frankfurt University is a bit special in this respect, since it was founded in 1914 by wealthy Frankfurt citizens. While today it is a publicly funded university as is common in Germany, there is a strong tradition of private sponsoring of research and higher education.
So, a few years ago, theoretical physicist Walter Greiner and neuroscientist Wolf Singer started using their connections to raise private funds to establish a new kind of institute, which was supposed to be legally independent, but closely connected to the university and its science departments. It should bring together theorists from such diverse areas as biology, chemistry, neuroscience, physics, and computer science in order to address problems all revolving around a common theme: The study of structure formation and self-organization in complex systems.
This was the beginning of FIAS.
Today, there are more than 50 scientists, guests and students working together on cooperative phenomena on length scales ranging from quarks in colour superconductivity and heavy ion collisions, via atoms in atomic clusters and macromolecules, to cells in the immune system and the brain. Details and more links can be found on the pages of the FIAS scientists.
The training of graduate students is organized in a Graduate School. Last summer, I was involved in the compilation of a brochure presenting the FIAS, and I was fascinated by the really inspiring atmosphere among the students, who come from all over the world and from very diverse scientific backgrounds, but were always involved in interesting discussions.
In September, the FIAS is supposed to move into the new, red building, which was built for the institute by a private sponsor, the Giersch Foundation. There, FIAS scientists will have a place to work and think - it will be interesting to follow the outcome of this kind of "experiment".
Tuesday, July 24, 2007
Don't fart
Okay, it's unlikely you visit this blog to hear my opinion about farting, but I just read this article in New Scientist
How the obesity epidemic is aggravating global warming
(Issue June 30th - July 6th, p. 21)
which is the most ridiculous fart line-up of weak links designed to support a specific opinion that I've come across lately. The argumentation of the author, Ian Roberts (a professor of public health in London), is roughly: if you're fat you are wasting energy. Either by storing fat such that it can't even be used as biofuel, or by moving it around with the help of gasoline-powered transportation devices.
To begin with, despite what the title says, the author does not actually talk about global warming, but about wasting energy. The connection between both is just assumed in the first sentence with 'we know humans are causing [global warming]', and not even once addressed after this. On the other hand, the connection between wasting energy and obesity is also constructed to make the point that you should lose weight to save the earth:
"[...] it is becoming clear that obese people are having a direct impact on the climate. This is happening through their lifestyles and the amount and type of food they eat, and the worse the obesity epidemic gets the greater its impact on global warming."
Well, if one wants to criticize a lifestyle, then one should criticise a lifestyle, but not add several associative leaps after that. Let us start with asking what exactly is a 'waste' of energy? Using energy for purposes that do not necessarily improve our well-being could generally be considered a waste. That goes for breaking a cellphone (consider all the energy needed to produce it), browsing the web the whole day (your home wireless doesn't run on vacuum energy) as well as for unnecessary consumption of food for whose production energy was needed.
However, whether that food is actually eaten or thrown away is completely irrelevant in this context. Also, on an equal footing one can argue that the mere presence of diet products damages the climate: it takes energy to produce and transport them, but the energy gain after consumption is lowered. Is there any reason to waste energy on producing diet coke when one can as well drink water? And while we're at it, is there any reason to go jogging every morning - isn't that just a waste of energy? Come to think about it, civilization itself seems to be a waste of energy.
The article goes on arguing
"[...] his greater bulk and higher metabolic rate will cause him to feel the heat more in the globally warmed summers, and he will be the first to turn on the energy intensive air conditioning."
If one argues that overweight people turn on the AC more often because they sweat more easily, one might want to take into account that underweight (or generally sickly) people tend to turn on the heating more often. People who suffer from back pain, arthritis and shortness of breath might use their car more often (as the article states), but this need not necessarily be a consequence of obesity. The only thing one can state is that being healthy and well adapted to the part of the world you live in minimizes the additional energy needed to survive and feel comfortable (how 'needed' relates to 'actually used' is a completely different question).
I am definitely in favor of more sidewalks, of increased awareness of health risks caused by obesity, and I totally agree that we should save energy. But I would appreciate a scientific discussion of these issues, and not a mixed-up mesh of several issues all drowned in political correctness.
In a similar spirit I read last week several articles claiming "Meat is murder on the environment" or likewise, a 'conclusion' based on a paper "Evaluating environmental impacts of the Japanese beef cow–calf system by the life cycle assessment method" (published in Animal Science Journal 78 (4), 424–432)
Being a vegetarian myself, I could give you a good number of reasons to drop the meat, but nothing you wouldn't find online in some thousand other places, so let me just focus on the issue at hand. If you want to save energy with the food you buy and eat, the most important factor to consider is origin and transportation.
• Your apple from New Zealand, labeled 'bio' or not, doesn't tunnel to you. In fact you could say that since, unlike beef, vegetables and fruits consist mostly of water, the amount of gasoline needed per energy content (joule) of transported food is higher for greens. So, preferably buy stuff that was not transported all around the globe whenever you can.
• If you buy products from countries where slash and burn is still practiced, you're damaging the environment more than if you support your local farmer - even if he's somewhat more expensive than Safeway.
• And, needless to say, don't buy stuff you don't need. Each time you have to throw something away, you are throwing away all the energy that was necessary to produce it. That doesn't only go for food, but for everything else including wrappings.
I want to add that much like cows, humans too release methane through flatulence, which is said to contribute to global warming. So maybe we should consider a national anti-fart campaign? Regarding the vegetarian factor, also please note that "The cellulose in vegetables cannot be digested, therefore vegetarians produce more gas than people with a mixed diet." [source]
The bottom line of this writing is: don't construct or publish, for the sake of a catchy headline, ridiculous cross-relations that are scientifically doubtful.
See also: Global Warming
Monday, July 23, 2007
This and That
• I am very proud to report that I eventually managed to install a recent-comments-box in the sidebar!! Thanks go via several detours back to Clifford.
• Flip has an excellent post on The Braneworld and the Hierarchy in the Randall Sundrum (I) model
• Hey America, Germany is catching up.
• Idea of the day: I suggest that journals which reject more than 70% of submitted manuscripts should offer a consolation gift. What I have in mind is a shirt saying "My manuscript went to PRD and all I got was this lousy T-shirt".
• Ever felt like your brain is too small? Think twice (if you have capacity left): Man with tiny brain shocks doctors
• Coincidentally, I came across the German version of Lee Smolin's book Warum gibt es die Welt? (Life of the Cosmos), which I found somewhat disturbing (I mean, even more than the English version). Among other things (that concern a Japanese surfer) I learned that New York is the largest city on the planet (or so the re-translation goes). Apologies to the translator*, but should you consider buying that book, I strongly recommend the English version (to read the original sentence go to amazon, and search inside for "irrelevant content" - amazingly the result is only one hit).
• Quotation of the day:
Ralph W. Emerson, in Society and Solitude [Vol 7], Chapter VII: Works and Days
* It turned out my husband knows him personally. It's a small world...
Sunday, July 22, 2007
GZK cutoff confirmed
In an earlier post, Bee explained the physics behind the GZK (Greisen, Zatsepin and Kuzmin) cutoff: protons traveling through outer space will - when their energy crosses a certain threshold - no longer experience the universe as transparent. If their energy is high enough, the protons can scatter off the omnipresent photons of the Cosmic Microwave Background and create pions. As a result, their mean free path drops considerably and only very few of them are expected to reach Earth. This threshold for photopion production by ultra-high-energy protons is known as the GZK cutoff.
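The threshold itself follows from simple kinematics. Here is a back-of-the-envelope estimate with typical numbers (a sketch, not the collaborations' analysis): for a head-on collision [itex]p + \gamma \to \Delta(1232) \to p + \pi[/itex], the proton threshold energy is [itex]E_p \approx (m_\Delta^2 - m_p^2)/(4 E_\gamma)[/itex].

m_delta = 1.232e9   # Delta(1232) mass in eV
m_p     = 0.938e9   # proton mass in eV
e_gamma = 6.0e-4    # typical CMB photon energy in eV (T ~ 2.7 K)
print((m_delta**2 - m_p**2) / (4.0 * e_gamma))   # ~ 3e20 eV

Averaging over the CMB photon spectrum and the scattering angles brings the effective cutoff down to the usually quoted ~5 x 10^19 eV.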
The presence of this cutoff had been observed by the HiRes cosmic ray array (Observation of the GZK Cutoff by the HiRes Experiment, arXiv:astro-ph/0703099), but had been disputed by the results from the Japanese detector AGASA (Akeno Giant Air Shower Array) which caused excitement when it failed to see the cut-off in data obtained up to 2004. A third experiment, the Pierre Auger Observatory on the plains of the Pampa Amarilla in western Argentina, which started taking data last year, now settled the question:
"If the AGASA had been correct, then we should have seen 30 events [at or above 1020 eV], and we see two," says Alan Watson, a physicist from the University of Leeds, U.K., and spokesperson for the Auger collaboration [source]. According to Watson, the data also suggests that these highest energy rays comprise protons and heavier nuclei, the latter of which don't feel the GZK drag.
The results were announced at the 30th International Cosmic Ray Conference in Merida, Yucatan, Mexico, and got a brief mention in Nature. The Nature article also points out that there is the prospect of identifying the regions of the sources of the highest-energy particles, but these data are preliminary. "Unless I talk in my sleep, even my wife doesn't know what these regions are", as Watson was quoted in Nature.
And of course, now that there is new data, somebody is around to claim one needs an even larger experiment to understand it: "Now we understand that above the GZK cutoff there are ten times less cosmic rays than we thought 10 years ago, so we may need a detector ten times as big as Auger," says Masahiro Teshima of the Max Planck Institute for Physics in Munich, Germany, who worked on AGASA and is working on the Telescope Array [source].
The recent paper by the Pierre Auger collaboration with more details was on the arxiv last week:
The UHECR spectrum measured at the Pierre Auger Observatory and its astrophysical implications
T.Yamamoto, for the Pierre Auger Collaboration, arXiv:0707.2638
Abstract: The Southern part of the Pierre Auger Observatory is nearing completion, and has been in stable operation since January 2004 while it has grown in size. The large sample of data collected so far has led to a significant improvement in the measurement of the energy spectrum of UHE cosmic rays over that previously reported by the Pierre Auger Observatory, both in statistics and in systematic uncertainties. We summarize two measurements of the energy spectrum, one based on the high-statistics surface detector data, and the other on the hybrid data, where the precision of the fluorescence measurements is enhanced by additional information from the surface array. The complementarity of the two approaches is emphasized and results are compared. Possible astrophysical implications of our measurements, and in particular the presence of spectral features, are discussed.
The upper end of the cosmic ray energy spectrum as measured by the Pierre Auger Observatory: The black dots represent data points, the blue and red curves are expectations derived from different models for the composition and energy distribution of the cosmic ray particles, all based on well-established physics including the GZK cutoff mechanism. Two events cannot be understood as stemming from protons, but may well be explained by heavier nuclei. (Figure from T. Yamamoto, The UHECR spectrum measured at the Pierre Auger Observatory and its astrophysical implications, ICRC'07; Credits: Auger Collaboration, technical information)
More plots and data can be found on the websites of the Pierre Auger Observatory.
Saturday, July 21, 2007
The LHC at Nature Insight
With less than a year to go before the start of the Large Hadron Collider at CERN, there has been a lot of media coverage about this huge collider lately - see e.g. at NYT, The New Yorker, and of course Bee's post The World's Largest Microscope.
Much more in-depth information on the physics, the history, and the engineering aspects of the LHC can be found in this week's Nature Insight: The Large Hadron Collider. Unfortunately, a subscription is required for the full content, but two interesting articles are freely available:
How the LHC came to be, by former CERN Director-General Chris Llewellyn Smith, on the political and organisational struggles involved in building such an international, multi-billion-euro machine, and Beyond the standard model with the LHC, by CERN theorist John Ellis (the guy with the penguins - see page 5), on the different options for possible new physics that might be discovered at the LHC.
Have a nice weekend!
Wednesday, July 18, 2007
Phenomenological Quantum Gravity
[This is the promised brief write-up of my talk at the Loops '07 in Morelia, slides can be found here, some more info about the conference here and here.
When I submitted the title for this talk, I actually expected a reply saying "Look. This is THE international conference on Quantum Gravity. We already have ten people speaking about phenomenology - could you be a bit more precise here?". But instead, I found myself joking that I am the phenomenology of the conference. Therefore, I added a somewhat extended motivation to my talk which I found blog-suitable, so here it is.]
The standard model (SM) of particle physics [1] is an extremely precise theory and has demonstrated its predictive power over the last decades. But it has also left us with several unsolved problems, questions that cannot be answered - that cannot even be addressed - within the SM. There are the mysterious whys: why three families, three generations, three interactions, three spatial dimensions? Why these interactions, why these masses, and these couplings? There are the cosmological puzzles, there is dark matter and dark energy. And then there is the holy grail of quantum gravity (see also: my top ten unsolved physics problems).
There are two ways to attack these problems. One is a top-down approach: starting with a promising fundamental theory, one tries to reach common ground and to connect to the standard model in a reductionist manner. The difficulty with this approach is that not only does one need that 'promising candidate for the fundamental theory', but most often one also has to come up with a whole new mathematical framework to deal with it. Most of the talks at the conference [2] were top-down approaches. The other way is to start from what we know and extend the SM in a constructivist approach. Examples for that might be to take the SM Lagrangian and just add all kinds of higher-order operators, thereby potentially giving up symmetries we know and like. The difficulty with this approach is to figure out what to do with all these potential extensions, and how to extract sensible knowledge about the fundamental theory from them.
I like it simple. Indeed, the most difficult thing about my work is how to pronounce 'phenomenology' (and I've practiced several years to manage that). So I picture myself somewhere in the middle. People have called that 'effective models' or 'test theories'. Others have called it 'cute' or 'nonsense'. I like to call it 'top-down inspired bottom-up approaches'. That is to say, I take some specific features that promising candidates for fundamental theories have, add them to the standard model, and examine the phenomenology. Typical examples are e.g. just asking what the presence of extra dimensions leads to. Or the presence of a minimal length. Or a preferred reference frame. You might also examine what consequences it would have if the holographic principle or entropy bounds were to hold. Or whether stochastic fluctuations of the background geometry would have observable consequences.
These approaches do not claim to be a fundamental theory of their own. Instead, they are simplified scenarios, suitable to examine certain features as to whether their realization would be compatible with reality. These models have their limitations, they are only approximations to a full theory. But to me, in a certain sense physics is the art of approximation. It is the art of figuring out what can be neglected, it is the art of building models, and the art of simplification.
"Science may be described as the art of systematic over-simplification."
~Karl Popper
One can imagine more beyond the standard model than just QG! So, if we are talking about phenomenology of quantum gravity we'll have to ask what we actually mean with that. To me, quantum gravity is the question how we can reconcile the apparent disagreements between classical General Relativity (GR) and QFT. And I say 'apparent' because nature knows how quantum objects fall, so there has to be a solution to that problem [3]. To be honest though, we don't even know that gravity is quantized at all.
I carefully state we don't 'know' because we have no observational evidence whatsoever for gravity being quantized. (The fact that we don't understand how a quantized field can be coupled to an unquantized gravitational field doesn't mean it's impossible.) Indeed one can be sceptical about whether it's observable at all. This is reflected very aptly in the quotation below from Freeman Dyson, which I think is deliberately provocative and basically says my whole field of work doesn't exist:
"According to my hypothesis, the gravitational field described by Einstein's theory of general relativity is a purely classical field without any quantum behavior [...] If this hypothesis is true, we have two separate worlds, the classical world of gravitation and the quantum world of atoms, described by separate theories. The two theories are mathematically different and cannot be applied simultaneously. But no inconsistency can arise from using both theories, because any differences between their predictions are physically undetectable."
~Freeman Dyson [Source]
Well. Needless to say, I do think there is phenomenology of QG that is in principle observable, even though we might not yet be able to observe it. And I do think that observing it will show us a way to QG.
However, there are various scenarios that could be realized at Planckian energies. Gravity could be quantized within one or the other approach. Also, higher-order terms in classical gravity could become important. Or, there could be semi-classical effects coming into the game. Now one tries to take some insights from these approaches, leading to the above-mentioned phenomenological models. Already here one most often has a redundancy. That is, various scenarios can lead to the same effect. E.g. modified dispersion relations, or the Planck scale being a fundamental limit to our resolution, are effects that show up in more than one approach. In addition, there's a second step in which these models are then used to make predictions. Again, various models, even though different, could yield the same predictions. That's what I like to call the 'inverse problem': how can we learn something about the underlying theory of quantum gravity from potential signatures?
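A typical example of such a redundant signature (a generic parametrization, not tied to any one approach): modified dispersion relations are often written as

[itex]E^2 = p^2 + m^2 + \eta\,\frac{p^3}{m_\mathrm{Pl}} + \dots,[/itex]

where different underlying models map onto different values and momentum dependences of the dimensionless parameter [itex]\eta[/itex]; this is exactly why measuring [itex]\eta[/itex] alone would not single out the fundamental theory.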
In the figure below I stress 'new and old' phenomenology because a sensible model shouldn't only be useful to make new predictions, it should also reproduce all that stuff we know and like. I have a really hard time taking seriously a model that doesn't reproduce the standard model and GR in suitable limits.
Now here are some approaches in this category of 'top-down inspired bottom-up approaches' that I find very interesting (for some literature, see e.g. this list):
(And possibly we can soon add macroscopic non-locality to that list, an interesting scenario that Fotini, Lee and Chanda are presently looking into.)
However, whenever one works within such a model one has to be aware of its limitations. E.g. the models with large extra dimensions are in my opinion a case in which what sensibly could be done has been done. And now we'll have to turn on the LHC and see. After the original ideas had been outlined, many people began to build more and more specific models with a lot of extra features. It's not that I don't find that interesting, but it's somewhat beside the point. To me it's like building a house and worrying about the color of the curtains before the first brick has been laid.
Now, all of the approaches I've mentioned above are attempts to get definitive signatures of QG, but so far none of these predictions on its own would be really conclusive. Take e.g. a possible modification of the GZK cutoff - it could have been 'new' physics, but it would not be clear which, or maybe just some not yet understood 'old' physics, like the showers not being created by protons from outside our galaxy as generally assumed?
So, my suggestion for making progress in this regard is to construct models that are suitable for investigating observables in various different areas. In such a way, we could be able to combine predictions and make them more conclusive. Think about the situation with GR at the beginning of the last century: It predicted a perihelion precession of Mercury, but there were other explanations, like an additional planet, a quadrupole moment of the sun, or maybe a modification of Newtonian gravity. It took another observable - in this case light deflection by the sun - predicted within the same framework to confirm that GR was the correct description of nature [4]. And please note, a factor 2 mattered here [5].
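For reference, the two observables in question (standard formulas, added here for illustration): GR predicts a perihelion shift per orbit of

[itex]\Delta\phi = \frac{6\pi G M_\odot}{c^2 a (1-e^2)} \approx 43''/\mathrm{century}[/itex] for Mercury,

and a light deflection at the solar limb of

[itex]\alpha = \frac{4 G M_\odot}{c^2 b} \approx 1.75'',[/itex]

exactly twice the value a Newtonian-style calculation gives (hence the factor 2; see footnote [5]).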
I personally am very optimistic about future progress in quantum gravity - and not only because it's hard to beat Dyson's pessimism. I think it doesn't matter where we start from, may it be a top-down approach, a bottom-up approach, or somewhere in the middle. I also think it doesn't matter which direction each of us starts into. The history of science tells us that there often are various different ways to arrive at the same conclusion. A particularly nice example is how Schrödinger's wave formulation and Heisenberg's matrix approach eventually turned out to be part of the same theory.
I think as long as we listen to what our theories tell us, if we take into account what nature has to say, are willing to redirect our research according to this - and if we don't get lost in distractions along the way, then I think we have good chances to find a way to quantum gravity. And this finally solves the mystery of the quotation on the last slide of my talk:
'The problem is all inside your head' she said to me
The answer is easy if you take it logically
I’d like to help you in your struggle to be free
There must be fifty ways to [quantum gravity]
[1] In my notation the SM includes General Relativity.
[2] The exception being the very recommendable talk on
Effective Quantum Gravity by John F. Donoghue.
[3] Though 3 years living in the US have taught me there's actually no such thing as a 'problem' - it's called a challenge. One just has to like them, eh?
[4] Admittedly, what the measurement actually said was not as straight forward as one would have wished. I leave it to my husband to elaborate on this interesting part of the history of science.
[5] The resulting deviation can be reproduced in the Newtonian approach up to a factor 1/2.
PS on Zeitgeist...
Tuesday, July 17, 2007
AvH's 10 point plan
The Alexander von Humboldt Foundation is the master of science networking among the German non-profit foundations. If you've managed to get one of their scholarships you become part of their brotherhood for a lifetime, including a membership card - Unfortunately I don't know about the secret handshake, since I've never even applied. The largest drawback of their scholarships is that one can only apply to a host who is also a member (Humboldtianer!), which was the reason for me to choose the German Academic Exchange Service (DAAD) instead.
However, I've just found that the AvH came up with a ten-point plan of recommendations "for making Germany more attractive for international cutting-edge researchers". Their suggestions make a lot of sense to me and I find the press release worth mentioning. Even though some of it (2./7.) addresses specifically German problems, points 9. and 10. especially apply to many other countries as well, as does 4., and 3. is generally a good idea (one that I too have mentioned repeatedly, and in my opinion an issue that will become more important the more complex and global the scientific community becomes). Let us hope that all these pretty word-ideas will have concrete consequences in the not too far future.
For the full text, see here. In brief the points are:
1. More jobs for scientists and scholars
On average, German professors supervise 63 students. This is more than twice as many as the average at top-rank international universities.
2. Academic careers need planning certainty: establishing tenure track as an option for junior researchers
German universities must take measures to plan the career stage between a doctorate and a secure professorship and make it compatible internationally. On the pattern of the Anglo-Saxon tenure track, clear, qualifying steps should be defined at which decisions are made about remaining at an institution.
3. Career support as an advisory and supervisory task of academic managers
Senior academics as well as university and/or institute directors must play an active role in human resources development for their junior researchers. Young scientists and scholars need careers advice.
4. Promoting early independence by taking risks in financing research
By international comparison, young academics in Germany have less scope for decision-making and action. Funding programmes for early, independent research must be strengthened. Especially for researchers at an early stage in their careers, procedures should be profiled for research work involving an unknown risk factor.
5. Making recruitment and appointments more professional
Appointment procedures must have an open outcome and be transparent. To this end, commissions charged with appointments must include external or independent expert reviewers. Good academics should be appointed quickly. Internationally respected universities can no longer afford to take years over appointments, particularly as universities and research establishments now actively have to recruit junior researchers internationally to a much greater extent than they did in the past.
6. Dissolve staff appointment schemes and adapt management structures
Rigid staff appointment schemes must make way for flexible appointment options, or be dissolved. Independent junior research group leaders must be put on a par with junior professors within the universities and in collaborations between universities and non-university research establishments.
7. Creating special regulations for collective wage agreements in the academic sector
According to many of those involved, the new wage agreement for the public service sector is not commensurate with appropriate remuneration for academic and non-academic staff at non-university and university research establishments. By comparison with other pay-scales, it is not competitive, either nationally or internationally, it restricts mobility, and its rigid conditions do not take account of the special features of academic life.
8. Internationally competitive remuneration
It must be ensured that cutting-edge researchers can be offered internationally competitive remuneration. The framework for allocating remuneration to professors currently valid at universities leaves too little scope for this.
9. Internationalising social security benefits
Internationally mobile researchers often have to accept major disadvantages or financial losses with regard to pension rights.
10. Increasing transparency and creating an attractive working environment
• Academic employers in Germany must be put in a position to offer organisational and financial support for removal and relocation which is already the norm in other countries, especially when top-rank academic personnel are appointed.
• Child-care facilities for internationally mobile researchers at universities and non-university research establishments must be expanded quickly and extensively. International appointments in Germany still often fail because there is a lack of child-care facilities.
• Careers advice and support for (marital) partners seeking employment as well as so-called dual career advice or support for academic couples are required to attract internationally mobile researchers. Examples from abroad indicate that this does not necessarily mean concrete job offers (which are often difficult to find); rather, intelligent counselling can satisfy many people's needs.
Related: See also The LHC Theory Initiative, The Terrascale Alliance, Temporary Display, and Temporary Display - Contd.
... is not only a German word that I've never heard a German actually use [1], but also the title of the new Smashing Pumpkins album. By coincidence, I was wearing my ancient ZERO shirt last week, so I felt it was my duty to pick up the CD.
It is an interesting album, but overall very disappointing. To begin with, I never liked Billy Corgan's voice, but if there's no way around it, it definitely goes better with melancholy and infinite sadness than with revolution. I mean, come on, he's composing a song in 2006 titled United States with lyrics saying "fight! I wanna fight! I wanna fight! revolution tonight!" and manages to sing it such that it could as well have been about, say, compactification on Calabi-Yau manifolds [2].
There are more politically flavored tracks on the album: For God and Country ("it's too late for some, it's too late for everyone") and Doomsday Clock ("it takes an unknown truth to get out, I'm guessing I'm born free, silly me") but the only thing worth mentioning about them is the fact there presently is a market for this. This tells a lot more about the 'Zeitgeist' than the music itself [3].
Most of the tracks on the CD sound extremely similar, drowned in an ever present electric guitar soup and exchangeable melodies. Billy Corgan is at his best with the slower and more thoughtful titles like e.g. Neverlost ("If you think just right, if you'll love you'll find, certain truths left behind").
Favourite tracks from previous albums: Disarm, To Sheila, Bullet with Butterfly Wings, 1979
[1] My husband proudly reports he can testify at least one incident in which one of his uncles, a Prof. for theology and philosophy, successfully used the word.
[2] That's why I call it a science blog.
[3] And while I am at it: the German 'ei' is pronounced like the English 'I' (or the beginning of the word 'aisle') in both places (whereas the German 'i' is pronounced like the English 'ee'). The German 'Z' is pronounced close to 'ts'. That is with 'Tsaitgaist', you'll make yourself understood better than with 'seetgeest'.
Monday, July 16, 2007
What's new?
Nothing. Well, almost nothing.
• I dyed my hair. The color is called 'ginger'. I'd have called it pumpkin. It actually looks like foul apricots. Saying of the day so far: 'What happened to your hair?' - 'It's an allergic reaction.' - 'To what?' - 'Stupid questions.' (As one can easily deduce, my conversation partner in this case was obviously not Canadian.)
• Though the plan was this year it would not be necessary to pack my household into boxes and drag them around, I will actually be moving twice before the end of the year. Don't ask. At least I am staying in town.
• My last plant, which suffered significantly during my previous trip, has surprisingly recovered (well, at least half of it), and is so not looking forward to my upcoming trip. This is to warn you that I'll be flying to Europe on Thursday, and be off and away for a while.
• I've found six degrees of freedom.
• I just saw this paper on the arxiv:
Search for Future Influence from L.H.C
By Holger B. Nielsen, Masao Ninomiya
Abstract: We propose an experiment which consists of pulling a card and use it to decide restrictions on the running of L.H.C. at CERN, such as luminosity, beam energy, or total shut down. The purpose of such an experiment is to look for influence from the future, backward causation. Since L.H.C. shall produce particles of a mathematically new type of fundamental scalars, i.e. the Higgs particles, there is potentially a chance to find hitherto unseen effects such as influence going from future to past, which we suggest in the present paper.
which features the idea that the nature of the Higgs field is such that it attempts to avoid its own production: "When the Higgs particle shall be produced, we shall retest if there could be influence from the future so that, for instance, the potential production of a large number of Higgs particles in a certain time development would cause a pre-arrangement so that the large number of Higgs productions, should be avoided."
Therefore - if this hypothesis is true - the LHC is likely to suffer an accident and has to be shut down. The argument is supported by the cancellation of the Superconducting Supercollider: "Thus it is really not unrealistic that precisely at the first a large number of Higgs production also our model-expectations that is influence from the future would show up. Very interestingly in this connection is that the S.S.C. in Texas accidentally would have been the first machine to produce Higgs on a large scale. However it were actually stopped after a quarter of the tunnel were built, almost a remarkable piece of bad luck."
The authors therefore propose to give backwards causation an economically less damaging possibility to avoid Higgs production by means of a card game that settles runs for the LHC, and permits the possibility of a complete shutdown in a quiet and undisastrous way.
One should take this very seriously: "It must be warned that if our model were true and no such game about restricting strongly L.H.C. were played [...] then a “normal” (seemingly accidental) closure should occur. This could be potentially more damaging than just the loss of L.H.C. itself. Therefore not performing [...] our card game proposal could - if our model were correct - cause considerable danger."
I find this interesting as it gives a completely new spin to postdiction. See, we now can have a theory that disables its own observability by backward causation. So, one can actually post-dict something before it has happened, and then go back into the future. Makes me wonder though why the universe hasn't disabled itself even before nucleosynthesis. Maybe God doesn't play dice with the universe, but card games instead?
• Have a good start into the week!
Saturday, July 14, 2007
First Light for the Gran Telescopio Canarias
Last night, the Gran Telescopio Canarias (GTC) at the Observatorio del Roque de los Muchachos of the European Northern Observatory (ENO) in La Palma, Canary Islands, Spain, saw its "First Light". The first star observed was Tycho 1205081, close to Polaris - a bit more photogenic is this shot of the pair of interacting galaxies UGC 10923 with extended star formation regions, taken with an exposure time of 50 seconds:
Interacting galaxies UGC 10923 seen with the eyes of the World's largest telescope (Credits: Gran Telescopio Canarias, Instituto de Astrofisica de Canarias)
The primary mirror of the new telescope is made up of 36 separate, hexagonal segments, fabricated at the Glaswerke Schott in Mainz, just around the corner from Frankfurt. Taken together, the segments have a light-collecting surface of 75.7 m2, which corresponds to a circular mirror with a diameter of 10.4 metres. At this size, it is currently the largest telescope for optical and near-infrared light!
The Gran Telescopio Canarias in La Palma, Canary Isles, in September 2006 (Credits: GTC project webcam)
This was in the news these days (see e.g. Le Monde), but the European Northern Observatory somehow has managed to issue a press release only in Spanish, so I am a bit at a loss to find more details. Actually, the report in the FAZ is very good, and recalls the developments that led to the construction of these huge telescopes:
I remember from the popular astronomy book I read as a kid that at that time the 5-metre mirror of the Mount Palomar telescope was thought to be the endpoint of the growth of telescope mirror size: larger solid mirrors are too heavy and deform when the telescope is moved, and moreover, the image gets blurred anyway by the distortions caused to the light as it passes through the atmosphere. As a case in point, a 6-metre telescope in the Soviet Union was mentioned, which produced pictures of not as high a quality as expected from its size. I was quite disappointed when I read that.
Fortunately, both obstacles could be overcome with new technologies first realised in the 1990s: Active Optics, which means that the mirror is always kept in perfect shape by an array of motors and can therefore be lightweight and large, and Adaptive Optics, which manages to compensate for the fluctuations of the density of air and allows for a seeing nearly as good as in space.
Among the big optical telescopes using these techniques - the Keck, Subaru and Gemini-North telescopes in Hawaii, the four mirrors of the Very Large Telescope and the Gemini-South telescope in Chile, the Large Binocular Telescope in Arizona, the Hobby-Eberly Telescope in Texas, and the Southern African Large Telescope in the South African Karoo - the Gran Telescopio Canarias is currently the largest one.
The good news is that all these telescopes will continue to take great shots of the Universe for the professionals and for armchair astronomers like me, even once the Hubble Space Telescope has stopped working.
Potentially Insane
If you have a look at the sidebar, you'll see that even the internet is presently bored! Here is what PI residents do when they go bonkers.
PI stands for... Probably Improbable, Politically Incorrect, Potentially Insane, Preon Infected, Problems Included, Proudly Ignorant, Promising Insults, Positively Irrational, Presently Insignificant, Philosophical Illusions, Physics Inside
Contributed submissions:
Promoting Ideas, Prain Included, Pump It, Plotting Infinity, Position Independent, Pissing Ion, Perfectly Intolerant, Protecting Insanity, Post Inflation, Plutonium Injection, Pain Intensifier, Premature Interruption, Positive Impact, Private Intrusion
And here is what Wikipedia had to add, see PI (disambiguation):
Primitive Instinct (sometimes), Public Intoxication (definitely), People's Initiative (more than useful), Principal Investigator (haven't seen one), Primary Immunodeficiency (not yet), Predictive Index (none), Provider Independent (that's what I dream of), Pass Interference (my job), Programmed Instruction (absent)
My apologies to the whole public outreach department. I expect a sentence of 4 months snow.
See also: 3.141592653589793238462...
Thursday, July 12, 2007
I once read a science fiction story about the not-too-far future. Our planet's flora became fed up with mankind, and decided to strike back. It began with plumbing problems - trees' roots destroying pipes - went on to grass breaking through the pavement and ivy growing over houses. I have to think about this each time I see a tree causing cracks in a walkway, or grass growing in every possible and impossible place.
Tuesday, July 10, 2007
Shrinking Earth
No, this is not about a resuscitation of old ideas about the history of planet Earth, but these days I could learn that the Earth Is Smaller Than Assumed, according to geodesists from the University of Bonn, who have discovered that the blue planet is really smaller than originally thought. Well - not really, I would say: these guys are talking about 5 millimetres, or 0.2 inch.
Anyway, this accurate result is really impressive! It results from the combined analysis of radio signals from distant quasars, observed by a worldwide net of more than 70 radio telescopes. Characteristic features in the radio signals from quasars are received at slightly different times at different places on Earth, and the combination of these measurements using the technique of Very Long Baseline Interferometry allows a very precise determination of the relative distance of the radio telescopes: these relative distances can be determined to within 2 millimetres over 1000 km, or 2 parts per billion (ppb). From the network of radio telescopes distributed all around the globe, it is possible to calculate the dimensions of the globe very precisely. This analysis, accomplished with improved precision over previous similar work by the Bonn geodesists, yields a diameter of the Earth 5 millimetres smaller than supposed so far. According to a report in the New Scientist about this result, the total diameter of the Earth at the equator is around 12,756.274 kilometres (7,926.3812 miles).
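Restated as a quick arithmetic check, using only the numbers quoted above; a minimal sketch:

# Baseline precision: 2 mm over 1000 km, as a dimensionless ratio.
baseline_precision = 2e-3 / 1e6
print(baseline_precision * 1e9)      # -> 2.0 parts per billion

# The 5 mm revision, relative to the ~12,756.274 km equatorial diameter:
print(5e-3 / 12_756_274 * 1e9)       # -> ~0.39 ppb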
Axel Nothnagel of the University of Bonn, who heads the team that provided new and more accurate data about the diameter of the Earth. (Credits: University of Bonn Press Release, July 5, 2007, Frank Luerweg)
A propos shrinking Earth: the Earth shrank by a huge step, in a metaphorical way, 45 years ago today, as I heard this morning on the radio: On July 10, 1962, TELSTAR was launched from Cape Canaveral, the first communications satellite which allowed live TV broadcasts between Europe and North America, bridging by the speed of light a distance that is steadily growing by 18 millimetres per year...
The TELSTAR communications satellite, launched 45 years ago today (Source: Wikipedia on Telstar)
PS: The paper by Axel Nothnagel's team is: The contribution of Very Long Baseline Interferometry to ITRF2005, by Markus Vennebusch, Sarah Böckmann and Axel Nothnagel, Journal of Geodesy 81 (2007) 553-564, DOI: 10.1007/s00190-006-0117-x. If someone can tell me where I can find the 5 millimetres in that paper, I would be very grateful ;-)
Today on the Arxiv
Today I came across this very entertaining paper
Hollywood Blockbusters: Unlimited Fun but Limited Science Literacy
By C.J. Efthimiou, R.A. Llewellyn
Abstract: In this article, we examine specific scenes from popular action and sci-fi movies and show how they blatantly break the laws of physics, all in the name of entertainment, but coincidentally contributing to science illiteracy.
I didn't even know there is an arxiv category for Physics and Society. The authors conclude with
"Hollywood is reinforcing (or even creating) incorrect scientific attitudes that can have negative results for the society. This is a good reason to recommend that all citizens be taught critical thinking and be required to develop basic science and quantitative literacy."
It's hard to disagree with that recommendation, even without reading the paper. Though I have to say, if somebody's scientific attitude is that he might survive a jump from the 15th floor, I guess natural selection will take care of that. In most cases, I think, we've all been taught from earliest childhood on not to mix up fiction with reality... That is, except for those of us who end up in theoretical physics, involuntarily or on purpose bending and breaking the laws of nature on our notebooks.
Update: See also The Physics of Nonphysical Systems.
Monday, July 09, 2007
Monday Links
In case you're just sitting at breakfast looking for a good read:
Sunday, July 08, 2007
The LHC Theory Initiative
Want proof that the grass is always greener on the other side? I just read this article
Refilling the Physicist Pool
about the LHC theory initiative:
"We are behind the Europeans, and we believe very strongly that we shouldn't just leave this work to the Europeans," Baur said in a UB statement. [...]
Funding in the US for particle physics as a whole and theoretical particle physics in particular has declined significantly over the past 15 years, Baur said. In addition, physics departments in US universities tend to hire faculty members who develop innovative ideas, whereas in Europe, the physics culture puts equal emphasis on novel research and solid calculations that help advance the field as a whole. But with the Large Hadron Collider -- the world's largest particle accelerator -- coming online in the next year or sooner, Baur said, the US cannot afford to fall behind."
It's interesting that in the US ideas are 'innovative' whereas in Europe they are 'novel' (especially since both refer to a field that is several decades old, and hasn't seen very much novelty lately). Admittedly, I find the perspective of a 'physics culture' that produces 'solid' next-to-next-to-next-to-next-to-leading-order calculations somewhat depressing.
For the German counterpart, see also the Terascale Alliance.
Saturday, July 07, 2007
I spent half of the day trying to sort through all that stuff which has accumulated on my desk while I was away. My efforts were impressively unsuccessful. The only thing that came out of this was the poem below. I think I'll go for a walk, buy a lighter and then give it a second try.
Cardboard boxes, paper piles,
Unread books, and many files,
Coffee cups and empty cans,
Post-its, trash and broken pens.
Unpaid bills, forgotten friends,
Pieces, broken in my hands,
Wedding photos in between
Notebooks and a magazine.
Plastic plants, a moving box,
And a pair of unmatched socks,
Unfinished, and missing pieces,
Leave me wondering where peace is.
[For more, check my website]
... I actually think I have a lighter... if only I could find it... what a mess!
Friday, July 06, 2007
It's all about sex...
... yes, we already knew that. Men are intelligent to impress women, and women are intelligent to find the best men. That's why you're sitting at your desk, chewing a pen, trying to quantize gravity.
Here's what Psychology tells us today (Source: Ten Politically Incorrect Truths About Human Nature, by Alan S. Miller and Satoshi Kanazawa):
Well, and once you've destroyed a civilization and sufficiently impressed every woman who was 'fit' enough to survive, keep in mind that by your human nature you are actually polygamous, because it's an evolutionary advantage:
And I'm sure 6 feet 4 also comes in handy for changing light-bulbs. On the other hand, there are certain natural selection mechanisms in societies which tolerate polygamy. As you'll also learn from the above article, suicide terrorists are dominantly Muslim because a) polygamy increases competition among men and b) because they are promised 72 virgins in heaven. (If only things were that simple. I still think airline passengers should stroke pigs before boarding, definitely preferable to throwing away my Coke each time I go through security.)
Also, sorry to report, but having children is, statistically speaking, a bad idea for men when it comes to the peak of the crime-and-creativity curve:
I especially like the part with 'they don't know why'. And finally, a Harvard professor solved the puzzle of why men prefer D-cups:
Well, I think there's truth in it, as my age seems to be incredibly hard to judge. Relatedly, you'll be interested to hear that a recent study shows Women Don't Talk More Than Guys:
"The researchers placed microphones on 396 college students for periods ranging from two to 10 days, sampled their conversations and calculated how many words they used in the course of a day. The score: Women, 16,215. Men, 15,669.The difference: 546 words: "Not statistically significant," say the researchers."
Have a nice weekend. Have fun. Reproduce. Go, discover a new country or write a sonnet.
Thursday, July 05, 2007
The Planck Scale
The Planck scales - a length and a mass* - indicate the regime in which we expect quantum gravitational effects to become important.
Gravity coupled to matter requires a coupling constant G that has units of length over mass. One finds the Planck scale if one lets quantum mechanics come into the game. For this, let us consider a quantum particle of a (so far unknown) mass mp with a Compton wavelength lp, the relation between both given by the Planck constant:

l_p = \frac{\hbar}{m_p c}
This is the quantum input. Now consider that particle to be as localized as possible, taking into account its quantum properties. That is, the mass mp is localized within a space-time region with extensions given by the particle's own Compton wavelength. The higher the mass of that particle, the smaller the wavelength. However, we know that General Relativity says if we push a fixed amount of mass together in a smaller and smaller region, it will eventually form a black hole. More generally, one can ask when the perturbation of the metric that this particle causes will be of order one:

\frac{G m_p}{c^2 l_p} \sim 1
which then can be solved for the mass, and subsequently for the length scale we were looking for. If one puts in some numbers one finds

m_p = \sqrt{\frac{\hbar c}{G}} \approx 2.2 \times 10^{-8}\,\mathrm{kg}, \qquad l_p = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}
These Planck scales thus indicate the limit in which the quantum properties of our particle will cause a non-negligible perturbation of the space-time metric, and we really have to worry about how to reconcile the classical with the quantum regime. Compared to energies that can be reached at a collider (the LHC will have a center of mass energy of the order 10 TeV), the Planck mass is huge. This reflects the fact that the gravitational force between elementary particles is very weak compared to the other forces that we know, and this is what makes it so hard to experimentally observe quantum gravitational effects.
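Plugging the measured constants into the formulas above reproduces the quoted orders of magnitude; a minimal numerical check:

import math

hbar = 1.054571817e-34  # J s
c = 299_792_458.0       # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2

m_p = math.sqrt(hbar * c / G)            # Planck mass
l_p = math.sqrt(hbar * G / c**3)         # Planck length
E_p_GeV = m_p * c**2 / 1.602176634e-10   # Planck energy in GeV

print(f"m_p = {m_p:.2e} kg, l_p = {l_p:.2e} m, E_p = {E_p_GeV:.2e} GeV")
# ~2.18e-8 kg, ~1.62e-35 m, ~1.22e19 GeV -- some fifteen orders of
# magnitude above the LHC's ~1e4 GeV (10 TeV) centre-of-mass energy.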
Max Planck introduced these quantities in 1899, the paper (it's in German) is available online
(Credits to Stefan for finding it). You'll find the natural mass scales introduced on page 479ff. He didn't call them 'Planck' scales then, and it is also interesting why he found them useful to introduce, namely because aliens would also use them:
"It is interesting to note that with the help of the [above constants] it is possible to introduce units [...] which [...] remain meaningful for all times and also for extraterrestrial and non-human cultures, and therefore can be understood as 'natural units'."
Coincidentally, yesterday I saw a paper on the arxiv
What is Special About the Planck Mass?
By C. Sivaram
Abstract: Planck introduced his famous units of mass, length and time a hundred years ago. The many interesting facets of the Planck mass and length are explored. The Planck mass ubiquitously occurs in astrophysics, cosmology, quantum gravity, string theory, etc. Current aspects of its implications for unification of fundamental interactions, energy dependence of coupling constants, dark energy, etc. are discussed.
which gives a nice introduction into the appearances of various mass scales in physics, with some historical notes.
* With the speed of light set equal to 1, in which case a length is the same as a time. If you find that confusing, just define a Planck time by dividing the length by the speed of light.
Chapter 9: Scattering in One Dimension
We now consider another one-dimensional problem, the scattering problem. In doing so we need to consider scattering-type solutions and what they mean. For standard scattering situations, the wave functions we use are usually those valid for regions of constant potential energy, such as complex exponentials (plane waves) when E > V0 and real exponentials when E < V0.[1]
[1] There is one other possibility that is not often considered. If E = V0, the solution to the Schrödinger equation yields a linear solution.
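To make these constant-potential solutions concrete, here is a minimal sketch (assuming units with ħ = m = 1) that evaluates the standard reflection and transmission coefficients for a potential step:

import numpy as np

def step_coefficients(E, V0):
    """Reflection and transmission coefficients for a potential step
    V(x) = 0 for x < 0 and V(x) = V0 for x > 0, with hbar = m = 1."""
    k1 = np.sqrt(2 * E)                  # plane-wave number for x < 0
    if E > V0:
        k2 = np.sqrt(2 * (E - V0))       # oscillatory solution for x > 0
        R = ((k1 - k2) / (k1 + k2)) ** 2
        T = 4 * k1 * k2 / (k1 + k2) ** 2
    else:
        R, T = 1.0, 0.0                  # E < V0: evanescent tail, total reflection
    return R, T

R, T = step_coefficients(E=2.0, V0=1.0)
print(R, T, R + T)                       # R + T = 1 (probability conservation)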
In quantum physics, you can decouple systems of particles that you can distinguish — that is, systems of identifiably different particles — into linearly independent equations. To illustrate this, suppose you have a system of many different types of cars floating around in space. You can distinguish all those cars because they're all different — they have different masses, for one thing.
Now say that each car interacts with its own potential — that is, the potential that any one car sees doesn't depend on any other car. That means that the potential for all cars is just the sum of the individual potentials each car sees, which looks like this, assuming you have N cars:

V(\mathbf{r}_1, \dots, \mathbf{r}_N) = \sum_{i=1}^{N} V_i(\mathbf{r}_i)
Being able to cut the potential energy up into a sum of independent terms like this makes life a lot easier. Here's what the Hamiltonian looks like:

H = \sum_{i=1}^{N} \left[ -\frac{\hbar^2}{2 m_i} \nabla_i^2 + V_i(\mathbf{r}_i) \right]
Notice how much simpler this equation is than the Hamiltonian for the hydrogen atom, where the Coulomb potential couples the electron and proton coordinates.
Note that you can separate the previous equation for the potential of all cars into N different equations:

\left[ -\frac{\hbar^2}{2 m_i} \nabla_i^2 + V_i(\mathbf{r}_i) \right] \psi_i(\mathbf{r}_i) = E_i \, \psi_i(\mathbf{r}_i)
And the total energy is just the sum of the energies of the individual cars:

E = \sum_{i=1}^{N} E_i
And the wave function is just the product of the individual wave functions:

\psi(\mathbf{r}_1, \dots, \mathbf{r}_N) = \prod_{i=1}^{N} \psi_{n_i}(\mathbf{r}_i)

Here the product symbol \prod works like the summation symbol \sum, except it stands for a product of terms, not a sum, and n_i refers to all the quantum numbers of the ith particle.
As you can see, when the particles you’re working with are distinguishable and subject to independent potentials, the problem of handling many of them becomes simpler. You can break the system up into N independent one-particle systems. The total energy is just the sum of the individual energies of each particle. The Schrödinger equation breaks down into N different equations. And the wave function ends up just being the product of the wave functions of the N different particles.
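A quick numerical way to see this separability: for two independent subsystems, the total Hamiltonian is a Kronecker sum, and its spectrum consists of all pairwise sums of the subsystem energies. A minimal sketch, with randomly generated matrices standing in for the one-particle Hamiltonians:

import numpy as np

rng = np.random.default_rng(0)

def random_hamiltonian(n):
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2               # symmetric, so real eigenvalues

# H = H1 (x) I + I (x) H2: two noninteracting subsystems combined.
H1, H2 = random_hamiltonian(3), random_hamiltonian(4)
H = np.kron(H1, np.eye(4)) + np.kron(np.eye(3), H2)

e1 = np.linalg.eigvalsh(H1)
e2 = np.linalg.eigvalsh(H2)
pairwise_sums = np.sort(np.add.outer(e1, e2).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(H)), pairwise_sums))  # True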
Take a look at an example. Say you have four particles, each with a different mass, in a square well. You want to find the energy and the wave function of this system. Take the well to have width a, so that for each of the four noninteracting particles the potential is

V_i(x_i) = 0 \text{ for } 0 < x_i < a, \qquad V_i(x_i) = \infty \text{ otherwise}

Here's what the Schrödinger equation looks like:

\sum_{i=1}^{4} \left( -\frac{\hbar^2}{2 m_i} \frac{\partial^2}{\partial x_i^2} + V_i(x_i) \right) \psi(x_1, x_2, x_3, x_4) = E \, \psi(x_1, x_2, x_3, x_4)

You can separate the preceding equation into four one-particle equations:

\left( -\frac{\hbar^2}{2 m_i} \frac{d^2}{d x_i^2} + V_i(x_i) \right) \psi_{n_i}(x_i) = E_i \, \psi_{n_i}(x_i)

The energy levels are

E_{n_i} = \frac{n_i^2 \pi^2 \hbar^2}{2 m_i a^2}

And because the total energy is the sum of the individual energies,

E = E_1 + E_2 + E_3 + E_4

the energy in general is

E = \frac{\pi^2 \hbar^2}{2 a^2} \left( \frac{n_1^2}{m_1} + \frac{n_2^2}{m_2} + \frac{n_3^2}{m_3} + \frac{n_4^2}{m_4} \right)

So here's the energy of the ground state — where all particles are in their ground states, n1 = n2 = n3 = n4 = 1:

E_0 = \frac{\pi^2 \hbar^2}{2 a^2} \left( \frac{1}{m_1} + \frac{1}{m_2} + \frac{1}{m_3} + \frac{1}{m_4} \right)

For a one-dimensional system with a particle in a square well, the wave function is

\psi_n(x) = \sqrt{\frac{2}{a}} \sin\left( \frac{n \pi x}{a} \right)

The wave function for the four-particle system is just the product of the individual wave functions, so it looks like this:

\psi(x_1, x_2, x_3, x_4) = \left( \frac{2}{a} \right)^{2} \sin\left( \frac{n_1 \pi x_1}{a} \right) \sin\left( \frac{n_2 \pi x_2}{a} \right) \sin\left( \frac{n_3 \pi x_3}{a} \right) \sin\left( \frac{n_4 \pi x_4}{a} \right)

For example, for the ground state, n1 = n2 = n3 = n4 = 1, you have

\psi_0(x_1, x_2, x_3, x_4) = \left( \frac{2}{a} \right)^{2} \sin\left( \frac{\pi x_1}{a} \right) \sin\left( \frac{\pi x_2}{a} \right) \sin\left( \frac{\pi x_3}{a} \right) \sin\left( \frac{\pi x_4}{a} \right)
So as you can see, systems of N independent, distinguishable particles are often susceptible to solution — all you have to do is to break them up into N independent equations. |
Introduction to special relativity
This article is a non-technical introduction to the subject. For the main encyclopedia article, see Special relativity.
Albert Einstein during a lecture in Vienna in 1921.
In physics, special relativity is a fundamental theory concerning space and time, developed by Albert Einstein in 1905[1] as a modification of Galilean relativity. (See "History of special relativity" for a detailed account and the contributions of Hendrik Lorentz and Henri Poincaré.) The theory was able to explain some pressing theoretical and experimental issues in the physics of the time involving light and electrodynamics, such as the failure of the 1887 Michelson–Morley experiment, which aimed to measure differences in the relative speed of light due to the Earth's motion through the hypothetical, and now discredited, luminiferous aether. The aether was then considered to be the medium of propagation of electromagnetic waves such as light.
Einstein postulated that the speed of light in free space is the same for all observers, regardless of their motion relative to the light source, where we may think of an observer as an imaginary entity with a sophisticated set of measurement devices, at rest with respect to itself, that perfectly records the positions and times of all events in space and time. This postulate stemmed from the assumption that Maxwell's equations of electromagnetism, which predict a specific speed of light in a vacuum, hold in any inertial frame of reference[2] rather than, as was previously believed, just in the frame of the aether. This prediction contradicted the laws of classical mechanics, which had been accepted for centuries, by arguing that time and space are not fixed and in fact change to maintain a constant speed of light regardless of the relative motions of sources and observers. Einstein's approach was based on thought experiments, calculations, and the principle of relativity, which is the notion that all physical laws should appear the same (that is, take the same basic form) to all inertial observers. Today, the result is that the speed of light defines the metre as "the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second."[3] This means that the speed of light is by convention 299 792 458 m/s (approximately 1.079 billion kilometres per hour, or 671 million miles per hour).
The predictions of special relativity are almost identical to those of Galilean relativity for most everyday phenomena, in which speeds are much lower than the speed of light, but it makes different, non-obvious predictions for objects moving at very high speeds. These predictions have been experimentally tested on numerous occasions since the theory's inception and were confirmed by those experiments.[4] The major predictions of special relativity are:
• Relativity of simultaneity: Observers who are in motion with respect to each other may disagree on whether two events occurred at the same time or one occurred before the other.
• Time dilation (An observer watching two identical clocks, one moving and one at rest, will measure the moving clock to tick more slowly)
• Relativistic mass
• Length contraction (Along the direction of motion, a rod moving with respect to an observer will be measured to be shorter than an identical rod at rest), and
• The equivalence of mass and energy (written as E = mc2).
Special relativity predicts a non-linear velocity addition formula which prevents speeds greater than that of light from being observed. In 1908, Hermann Minkowski reformulated the theory based on different postulates of a more geometrical nature.[5] This approach considers space and time as being different components of a single entity, the spacetime, which is "divided" in different ways by observers in relative motion. Likewise, energy and momentum are the components of the four-momentum, and the electric and magnetic field are the components of the electromagnetic tensor.
As Galilean relativity is now considered an approximation of special relativity valid for low speeds, special relativity is considered an approximation of the theory of general relativity valid for weak gravitational fields. General relativity postulates that physical laws should appear the same to all observers (an accelerating frame of reference being equivalent to one in which a gravitational field acts), and that gravitation is the effect of the curvature of spacetime caused by energy (including mass).
Reference frames and Galilean relativity: a classical prelude
A reference frame is simply a selection of what constitutes a stationary object. Once the velocity of a certain object is arbitrarily defined to be zero, the velocity of everything else in the universe can be measured relative to that object.[Note 1]
One oft-used example is the difference in measurements of objects on a train as made by an observer on the train compared to those made by one standing on a nearby platform as it passes.
Consider the seats on the train car in which the passenger observer is sitting.
The distances between these objects and the passenger observer do not change. Therefore, this observer measures all of the seats to be at rest, since he is stationary from his own perspective.
An observer standing on the platform would see exactly the same objects but interpret them very differently. The distance between the platform observer and the seats on the train car is changing, and so the platform observer concludes that the seats are moving forward, as is the whole train. Thus for one observer the seats are at rest, while for the other the seats are moving, and both are correct, since they are using different definitions of "at rest" and "moving". Each observer has a distinct "frame of reference" in which velocities are measured, the rest frame of the platform and the rest frame of the train – or simply the platform frame and the train frame.
Why can't we select one of these frames to be the "correct" one? Or more generally, why is there not a frame we can select to be the basis for all measurements, an "absolutely stationary" frame?
Aristotle imagined the Earth lying at the centre of the universe (the geocentric model), unmoving as other objects moved about it. In this worldview, one could select the surface of the Earth as the absolute frame. However, as the geocentric model was challenged and finally fell in the 1500s, it was realised that the Earth was not stationary at all, but both rotating on its axis as well as orbiting the Sun. In this case the Earth is clearly not the absolute frame. But perhaps there is some other frame one could select, perhaps the Sun's?
Galileo challenged this idea and argued that the concept of an absolute frame, and thus absolute velocity, was unreal; all motion was relative. Galileo gave the common-sense "formula" for adding velocities: if
1. particle P is moving at velocity v with respect to reference frame A and
2. reference frame A is moving at velocity u with respect to reference frame B, then
3. the velocity of P with respect to B is given by v + u.
In modern terms, we expand the application of this concept from velocity to all physical measurements – according to what we now call the Galilean transformation, there is no absolute frame of reference. An observer on the train has no measurement that distinguishes whether the train is moving forward at a constant speed, or the platform is moving backwards at that same speed. The only meaningful statement is that the train and platform are moving relative to each other, and any observer can choose to define what constitutes a speed equal to zero. When considering trains moving by platforms it is generally convenient to select the frame of reference of the platform, but such a selection would not be convenient when considering planetary motion and is not intrinsically more valid.
One can use this formula to explore whether or not any possible measurement would remain the same in different reference frames. For instance, if the passenger on the train threw a ball forward, he would measure one velocity for the ball, and the observer on the platform another. After applying the formula above, though, both would agree that the velocity of the ball is the same once corrected for a different choice of what speed is considered zero. This means that motion is "invariant". Laws of classical mechanics, like Newton's second law of motion, all obey this principle because they have the same form after applying the transformation. As Newton's law involves the derivative of velocity, any constant velocity added in a Galilean transformation to a different reference frame contributes nothing (the derivative of a constant is zero).
This means that the Galilean transformation and the addition of velocities only apply to frames that are moving at a constant (relative) velocity. Since objects tend to retain their current velocity due to a property we call inertia, frames that refer to objects with constant speed are known as inertial reference frames. The Galilean transformation, then, does not apply to accelerations, only velocities, and classical mechanics is not invariant under acceleration. This mirrors the real world, where acceleration is easily distinguishable from smooth motion in any number of ways. For example, if an observer on a train saw a ball roll backward off a table, he would be able to infer that the train was accelerating forward, since the ball remains at rest unless acted upon by an external force. Therefore, the only explanation is that the train has moved underneath the ball, resulting in an apparent motion of the ball. Addition of a time-varying velocity, corresponding to an accelerated reference frame, changes the formula (see pseudo-force).
Both the Aristotelian and Galilean views of motion contain an important assumption. Motion is defined as the change of position over time, but both of these quantities, position and time, are not defined within the system. It is assumed, explicitly in the Greek worldview, that space and time lie outside physical existence and are absolute even if the objects within them are measured relative to each other. The Galilean transformations can only be applied because both observers are assumed to be able to measure the same time and space, regardless of their frames' relative motions. So in spite of there being no absolute motion, it is assumed there is some, perhaps unknowable, absolute space and time.
Classical physics and electromagnetism
Through the era between Newton and around the start of the 20th century, the development of classical physics had made great strides. Newton's application of the inverse square law to gravity was the key to unlocking a wide variety of physical events, from heat to light, and calculus made the direct calculation of these effects tractable. Over time, new mathematical techniques, notably the Lagrangian, greatly simplified the application of these physical laws to more complex problems.
As electricity and magnetism were better explored, it became clear that the two concepts were related. Over time, this work culminated in Maxwell's equations, a set of four equations that could be used to calculate the entirety of electromagnetism. One of the most interesting results of the application of these equations was that it was possible to construct a self-sustaining wave of electrical and magnetic fields that could propagate through space. When reduced, the mathematics demonstrated that the speed of propagation was dependent on two universal constants, and their ratio was the speed of light. Light was an electromagnetic wave.
Under the classic model, waves are displacements within a medium. In the case of light, the waves were thought to be displacements of a special medium known as the luminiferous aether, which extended through all space. This being the case, light travels in its own frame of reference, the frame of the aether. According to the Galilean transform, we should be able to measure the difference in velocities between the aether's frame and any other – a universal frame at last.
Designing an experiment to actually carry out this measurement proved very difficult, however, as the speeds and timing involved made accurate measurement difficult. The measurement problem was eventually solved with the Michelson–Morley experiment. To everyone's surprise, no relative motion was seen. Either the aether was travelling at the same velocity as the Earth, difficult to imagine given the Earth's complex motion, or there was no aether. Follow-up experiments tested various possibilities, and by the start of the 20th century it was becoming increasingly difficult to escape the conclusion that the aether did not exist.
These experiments all showed that light simply did not follow the Galilean transformation. And yet it was clear that physical objects emitted light, which led to unsolved problems. If one were to carry out the experiment on the train by "throwing light" instead of balls, if light does not follow the Galilean transformation then the observers should not agree on the results. Yet it was apparent that the universe disagreed; physical systems known to be moving at great speeds, like distant stars, showed physics as similar to our own as measurements could discern. Some sort of transformation had to be acting on light, or better, a single transformation for both light and matter.
The development of a suitable transformation to replace the Galilean transformation is the basis of special relativity.
Invariance of length: the Euclidean picture
Pythagoras' theorem
In special relativity, space and time are joined into a unified four-dimensional continuum called spacetime. To gain a sense of what spacetime is like, we must first look at the Euclidean space of classical Newtonian physics. This approach to explaining the theory of special relativity begins with the concept of "length".
In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place; as a result the simple length of an object doesn't appear to change or is invariant. However, as is shown in the illustrations below, what is actually being suggested is that length seems to be invariant in a three-dimensional coordinate system.
The length of a line in a two-dimensional Cartesian coordinate system is given by Pythagoras' theorem:
h^2 = x^2 + y^2
One of the basic theorems of vector algebra is that the length of a vector does not change when it is rotated. However, a closer inspection tells us that this is only true if we consider rotations confined to the plane. If we introduce rotation in the third dimension, then we can tilt the line out of the plane. In this case the projection of the line on the plane will get shorter. Does this mean the line's length changes? – obviously not. The world is three-dimensional and in a 3D Cartesian coordinate system the length is given by the three-dimensional version of Pythagoras's theorem:
k^2 = x^2 + y^2 + z^2
This is invariant under all rotations. The apparent violation of invariance of length only happened because we were "missing" a dimension. It seems that, provided all the directions in which an object can be tilted or arranged are represented within a coordinate system, the length of an object does not change under rotations. With time and space considered to be outside the realm of physics itself, under classical mechanics a 3-dimensional coordinate system is enough to describe the world.
Note that invariance of length is not ordinarily considered a principle or law, not even a theorem. It is simply a statement about the fundamental nature of space itself. Space as we ordinarily conceive it is called a three-dimensional Euclidean space, because its geometrical structure is described by the principles of Euclidean geometry. The formula for distance between two points is a fundamental property of a Euclidean space, it is called the Euclidean metric tensor (or simply the Euclidean metric). In general, distance formulas are called metric tensors.
Note that rotations are fundamentally related to the concept of length. In fact, one may define length or distance to be that which stays the same (is invariant) under rotations, or define rotations to be that which keep the length invariant. Given any one, it is possible to find the other. If we know the distance formula, we can find out the formula for transforming coordinates in a rotation. If, on the other hand, we have the formula for rotations then we can find out the distance formula.
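This duality is easy to check numerically; a minimal sketch that rotates an arbitrary 3D vector and confirms its Euclidean length is unchanged:

import numpy as np

theta = 0.7  # rotation angle in radians, arbitrary
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])

v = np.array([1.0, 2.0, 3.0])
print(np.linalg.norm(v), np.linalg.norm(Rz @ v))  # identical lengths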
The Minkowski formulation: introduction of spacetime
Main article: Spacetime
After Einstein derived special relativity formally from the (at first sight counter-intuitive) assumption that the speed of light is the same to all observers, Hermann Minkowski built on mathematical approaches used in non-euclidean geometry[6] and on the mathematical work of Lorentz and Poincaré. Minkowski showed in 1908 that Einstein's new theory could also be explained by replacing the concept of a separate space and time with a four-dimensional continuum called spacetime. This was a groundbreaking concept, and Roger Penrose has said that relativity was not truly complete until Minkowski reformulated Einstein's work.[7]
The concept of a four-dimensional space is hard to visualise. It may help at the beginning to think simply in terms of coordinates. In three-dimensional space, one needs three real numbers to refer to a point. In the Minkowski space, one needs four real numbers (three space coordinates and one time coordinate) to refer to a point at a particular instant of time. This point, specified by the four coordinates, is called an event. The distance between two different events is called the spacetime interval.
A path through the four-dimensional spacetime (usually known as Minkowski space) is called a world line. Since it specifies both position and time, a particle having a known world line has a completely determined trajectory and velocity. This is just like graphing the displacement of a particle moving in a straight line against the time elapsed. The curve contains the complete motional information of the particle.
In the same way as the measurement of distance in 3D space needed all three coordinates, we must include time as well as the three space coordinates when calculating the distance in Minkowski space (henceforth called M). In a sense, the spacetime interval provides a combined estimate of how far apart two events occur in space as well as the time that elapses between their occurrence.
But there is a problem; time is related to the space coordinates, but they are not equivalent. Pythagoras' theorem treats all coordinates on an equal footing (see Euclidean space for more details). We can exchange two space coordinates without changing the length, but we can not simply exchange a space coordinate with time – they are fundamentally different. It is an entirely different thing for two events to be separated in space and to be separated in time. Minkowski proposed that the formula for distance needed a change. He found that the correct formula was actually quite simple, differing only by a sign from Pythagoras' theorem:

s^2 = x^2 + y^2 + z^2 - (ct)^2

where c is a constant and t is the time coordinate.[Note 2] Multiplication by c, which has the dimensions L T−1, converts the time to units of length and this constant has the same value as the speed of light. So the spacetime interval between two distinct events is given by

s^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2 - c^2 (t_2 - t_1)^2
There are two major points to be noted. Firstly, time is being measured in the same units as length by multiplying it by a constant conversion factor. Secondly, and more importantly, the time-coordinate has a different sign than the space coordinates. This means that in the four-dimensional spacetime, one coordinate is different from the others and influences the distance differently. This new "distance" may be zero or even negative. This new distance formula, called the metric of the spacetime, is at the heart of relativity. This distance formula is called the metric tensor of M. This minus sign means that a lot of our intuition about distances can not be directly carried over into spacetime intervals. For example, the spacetime interval between two events separated both in time and space may be zero (see below). From now on, the terms distance formula and metric tensor will be used interchangeably, as will be the terms Minkowski metric and spacetime interval.
In Minkowski spacetime the spacetime interval is the invariant length, the ordinary 3D length is not required to be invariant. The spacetime interval must stay the same under rotations, but ordinary lengths can change. Just like before, we were missing a dimension. Note that everything thus far is merely definitions. We define a four-dimensional mathematical construct which has a special formula for distance, where distance means that which stays the same under rotations (alternatively, one may define a rotation to be that which keeps the distance unchanged).
Now comes the physical part. Rotations in Minkowski space have a different interpretation than ordinary rotations. These rotations correspond to transformations of reference frames. Passing from one reference frame to another corresponds to rotating the Minkowski space. An intuitive justification for this is given below, but mathematically this is a dynamical postulate just like assuming that physical laws must stay the same under Galilean transformations (which seems so intuitive that we don't usually recognise it to be a postulate).
Since by definition rotations must keep the distance same, passing to a different reference frame must keep the spacetime interval between two events unchanged. This requirement can be used to derive an explicit mathematical form for the transformation that must be applied to the laws of physics (compare with the application of Galilean transformations to classical laws) when shifting reference frames. These transformations are called the Lorentz transformations. Just like the Galilean transformations are the mathematical statement of the principle of Galilean relativity in classical mechanics, the Lorentz transformations are the mathematical form of Einstein's principle of relativity. Laws of physics must stay the same under Lorentz transformations. Maxwell's equations and Dirac's equation satisfy this property, and hence they are relativistically correct laws (but classically incorrect, since they don't transform correctly under Galilean transformations).
With the statement of the Minkowski metric, the common name for the distance formula given above, the theoretical foundation of special relativity is complete. The entire basis for special relativity can be summed up by the geometric statement "changes of reference frame correspond to rotations in the 4D Minkowski spacetime, which is defined to have the distance formula given above". The unique dynamical predictions of SR stem from this geometrical property of spacetime. Special relativity may be said to be the physics of Minkowski spacetime.[8][9][10][11] In this case of spacetime, there are six independent rotations to be considered. Three of them are the standard rotations on a plane in two directions of space. The other three are rotations in a plane of both space and time: these rotations correspond to a change of velocity, and Minkowski diagrams describe such rotations.
As has been mentioned before, one can replace distance formulas with rotation formulas. Instead of starting with the invariance of the Minkowski metric as the fundamental property of spacetime, one may state (as was done in classical physics with Galilean relativity) the mathematical form of the Lorentz transformations and require that physical laws be invariant under these transformations. This makes no reference to the geometry of spacetime, but will produce the same result. This was in fact the traditional approach to SR, used originally by Einstein himself. However, this approach is often considered to offer less insight and be more cumbersome than the more natural Minkowski formalism.
Reference frames and Lorentz transformations: relativity revisited
Changes in reference frame, represented by velocity transformations in classical mechanics, are represented by rotations in Minkowski space. These rotations are called Lorentz transformations. They are different from the Galilean transformations because of the unique form of the Minkowski metric. The Lorentz transformations are the relativistic equivalent of Galilean transformations. Laws of physics, in order to be relativistically correct, must stay the same under Lorentz transformations. The physical statement that they must be the same in all inertial reference frames remains unchanged, but the mathematical transformation between different reference frames changes. Newton's laws of motion are invariant under Galilean rather than Lorentz transformations, so they are immediately recognisable as non-relativistic laws and must be discarded in relativistic physics. The Schrödinger equation is also non-relativistic.
Maxwell's equations are written using vectors and at first glance appear to transform correctly under Galilean transformations. But on closer inspection, several questions are apparent that can not be satisfactorily resolved within classical mechanics (see History of special relativity). They are indeed invariant under Lorentz transformations and are relativistic, even though they were formulated before the discovery of special relativity. Classical electrodynamics can be said to be the first relativistic theory in physics. To make the relativistic character of equations apparent, they are written using four-component vector-like quantities called four-vectors. Four-vectors transform correctly under Lorentz transformations, so equations written using four-vectors are inherently relativistic. This is called the manifestly covariant form of equations. Four-vectors form a very important part of the formalism of special relativity.
Einstein's postulate: the constancy of the speed of light
Einstein's postulate that the speed of light is a constant comes out as a natural consequence of the Minkowski formulation.[12]
Proposition 1:
When an object is travelling at c in a certain reference frame, the spacetime interval is zero.
The spacetime interval between the origin-event (0,0,0,0) and an event (x,y,z,t) is
s^2 = x^2 + y^2 + z^2 - (ct)^2
The distance travelled by an object moving at velocity v for t seconds is:

\sqrt{x^2 + y^2 + z^2} = vt

so that

s^2 = (vt)^2 - (ct)^2
Since the velocity v equals c we have
s^2 = (ct)^2 - (ct)^2
Hence the spacetime interval between the events of departure and arrival is given by
s^2 = 0
Proposition 2:
An object travelling at c in one reference frame is travelling at c in all reference frames.
Let the object move with velocity v when observed from a different reference frame. A change in reference frame corresponds to a rotation in M. Since the spacetime interval must be conserved under rotation, the spacetime interval must be the same in all reference frames. In proposition 1 we showed it to be zero in one reference frame, hence it must be zero in all other reference frames. We get that
(vt)^2 - (ct)^2 = 0
which implies
|v| = c
The paths of light rays have a zero spacetime interval, and hence all observers will obtain the same value for the speed of light. Therefore, when assuming that the universe has four dimensions that are related by Minkowski's formula, the speed of light appears as a constant, and does not need to be assumed (postulated) to be constant as in Einstein's original approach to special relativity.
Clock delays and rod contractions: more on Lorentz transformations
Another consequence of the invariance of the spacetime interval is that clocks will appear to go slower on objects that are moving relative to the observer. This is very similar to how the 2D projection of a line rotated into the third dimension appears to get shorter. Length is not conserved simply because we are ignoring one of the dimensions. Let us return to the example of John and Bill.
John observes the length of Bill's spacetime interval as:
s^2 = (vt)^2 - (ct)^2
whereas Bill doesn't think he has traveled in space, so writes:
s^2 = (0)^2 - (cT)^2
The spacetime interval, s2, is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. This means that Bill's spacetime interval equals John's observation of Bill's spacetime interval so:
(0)^2 - (cT)^2 = (vt)^2 - (ct)^2

which solves to

t = \frac{T}{\sqrt{1 - \frac{v^2}{c^2}}}
So, if John sees a clock that is at rest in Bill's frame record one second, John will find that his own clock measures between these same ticks an interval t, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those of observers at rest. This is known as "relativistic time dilation of a moving clock". The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock.
In special relativity, therefore, changes in reference frame affect time also. Time is no longer absolute. There is no universally correct clock; time runs at different rates for different observers.
Similarly it can be shown that John will also observe measuring rods at rest on Bill's planet to be shorter in the direction of motion than his own measuring rods.[Note 3] This is a prediction known as "relativistic length contraction of a moving rod". If the length of a rod at rest on Bill's planet is X, then we call this quantity the proper length of the rod. The length x of that same rod as measured on John's planet, is called coordinate length, and given by
x = X \sqrt{1 - \frac{v^2}{c^2}}
How Bill's coordinates appear to John at the instant they pass each other
These two equations can be combined to obtain the general form of the Lorentz transformation in one spatial dimension:
T = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad X = \gamma \left( x - v t \right)

or equivalently:

t = \gamma \left( T + \frac{v X}{c^{2}} \right), \qquad x = \gamma \left( X + v T \right)

where the Lorentz factor is given by

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
The above formulas for clock delays and length contractions are special cases of the general transformation.
Alternatively, these equations for time dilation and length contraction (here obtained from the invariance of the spacetime interval), can be obtained directly from the Lorentz transformation by setting X = 0 for time dilation, meaning that the clock is at rest in Bill's frame, or by setting t = 0 for length contraction, meaning that John must measure the distances to the end points of the moving rod at the same time.
A consequence of the Lorentz transformations is the modified velocity-addition formula:
s = \frac{v+u}{1+(v/c)(u/c)}
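A minimal numerical sketch tying these results together: boost an event with the one-dimensional Lorentz transformation, confirm the spacetime interval is invariant, and check that velocity addition never exceeds c (the event coordinates are arbitrary):

import numpy as np

c = 299_792_458.0  # speed of light, m/s

def boost(t, x, v):
    """One-dimensional Lorentz transformation of an event (t, x)
    into a frame moving at velocity v."""
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor gamma
    return g * (t - v * x / c**2), g * (x - v * t)

def interval(t, x):
    return x**2 - (c * t) ** 2               # spacetime interval s^2

def add_velocities(v, u):
    return (v + u) / (1.0 + v * u / c**2)    # relativistic velocity addition

t, x = 2.0, 1.0e8                            # an arbitrary event
T, X = boost(t, x, 0.6 * c)                  # gamma = 1.25 at v = 0.6c
print(interval(t, x), interval(T, X))        # equal: the interval is invariant
print(add_velocities(0.8 * c, 0.8 * c) / c)  # ~0.976: still below c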
Simultaneity and clock desynchronisation
The last consequence of Minkowski's spacetime is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronised so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. This means that observers who are moving relative to each other see different events as simultaneous. This effect is known as "Relativistic Phase" or the "Relativity of Simultaneity". Relativistic phase is often overlooked by students of special relativity, but if it is understood, then phenomena such as the twin paradox are easier to understand.
The "plane of simultaneity" or "surface of simultaneity" contains all those events that happen at the same instant for a given observer. Events that are simultaneous for one observer are not simultaneous for another observer in relative motion.
Observers have a set of simultaneous events around them that they regard as composing the present instant. The relativity of simultaneity results in observers who are moving relative to each other having different sets of events in their present instant.
The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion, and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards, relative to the time axis in the direction of travel, akin to a skew or shear of three-dimensional space.
Great care is needed when interpreting spacetime diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero length spacetime interval appears.
General relativity: a peek forward
Unlike Newton's laws of motion, relativity is not based upon dynamical postulates. It does not assume anything about motion or forces. Rather, it deals with the fundamental nature of spacetime. It is concerned with describing the geometry of the backdrop on which all dynamical phenomena take place. In a sense therefore, it is a meta-theory, a theory that lays out a structure that all other theories must follow. In truth, special relativity is only a special case. It assumes that spacetime is flat. That is, it assumes that the structure of Minkowski space and the Minkowski metric tensor is constant throughout. In general relativity, Einstein showed that this is not true. The structure of spacetime is modified by the presence of matter. Specifically, the distance formula given above is no longer generally valid except in space free from mass. However, just like a curved surface can be considered flat in the infinitesimal limit of calculus, a curved spacetime can be considered flat at a small scale. This means that the Minkowski metric written in the differential form is generally valid.
ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2
One says that the Minkowski metric is valid locally, but it fails to give a measure of distance over extended distances. It is not valid globally. In fact, in general relativity the global metric itself becomes dependent on the mass distribution and varies through space. The central problem of general relativity is to solve the famous Einstein field equations for a given mass distribution and find the distance formula that applies in that particular case. Minkowski's spacetime formulation was the conceptual stepping stone to general relativity. His fundamentally new outlook allowed not only the development of general relativity, but also to some extent quantum field theories.
Mass–energy equivalence
As we increase an object's energy by accelerating it, such that its speed approaches the speed of light from an observer's point of view, its total (relativistic) mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. This ultimately leads to the concept of mass-energy equivalence.
Any object that has mass when at rest (in a given inertial frame of reference) equivalently has rest energy, as can be calculated using Einstein's equation E = mc². Rest energy, being a form of energy, is interconvertible with other forms of energy. As with any energy transformation, the total amount of energy does not increase or decrease in such a process. From this perspective, the amount of matter in the universe contributes to its total energy.
Similarly, the total amount of energy of any system manifests as an equivalent total amount of mass, not limited to the case of the relativistic mass of a moving body. For example, adding 25 kilowatt-hours (90 megajoules) of any form of energy to an object increases its mass by about 1 microgram. With a sensitive enough mass balance or scale, this increase could be measured. Our Sun (or a nuclear bomb) converts nuclear potential energy to other forms of energy; its total mass does not decrease in itself because it still contains the same total energy in different forms, but its mass does decrease when the energy escapes to its surroundings, largely as radiant energy.
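The 25 kilowatt-hour figure is easy to verify from E = mc²; a one-line check:

```python
c = 299_792_458.0        # speed of light, m/s
E = 25 * 3.6e6           # 25 kilowatt-hours in joules (1 kWh = 3.6 MJ), i.e. 90 MJ
m = E / c**2             # E = mc^2 rearranged for the equivalent mass
print(f"{m * 1e9:.3f} micrograms")   # ~1.001 micrograms, matching the figure above
```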
There is a common perception that relativistic physics is not needed for practical purposes or in everyday life. This is not true. Without relativistic effects, gold would look silvery, rather than yellow.[13] Many technologies are critically dependent on relativistic physics.
The postulates of special relativity
Einstein developed special relativity on the basis of two postulates:
• The principle of relativity: the laws of physics take the same form in all inertial frames of reference.
• The constancy of the speed of light: light propagates through empty space with a definite speed c that is independent of the motion of the source or observer.
Special relativity can be derived from these postulates, as was done by Einstein in 1905. Einstein's postulates are still applicable in the modern theory, but their origin is now more explicit. It was shown above how the existence of a universally constant velocity (the speed of light) is a consequence of modeling the universe as a particular four-dimensional space having certain specific properties. The principle of relativity is a result of the Minkowski structure being preserved under Lorentz transformations, which are postulated to be the physical transformations of inertial reference frames.
See also
• The mass of objects and systems of objects has a complex interpretation in special relativity; see relativistic mass.
• "Minkowski also shared Poincaré's view of the Lorentz transformation as a rotation in a four-dimensional space with one imaginary coordinate, and his five four-vector expressions." (Walter 1999).
1. ^ There exists a more technical but mathematically convenient description of reference frames. A reference frame may be considered to be an identification of points in space at different times. That is, it is the identification of space points at different times as being the same point. This concept, particularly useful in making the transition to relativistic spacetime, is described in the language of affine space by VI Arnold in Mathematical Methods in Classical Mechanics, and in the language of fibre bundles by Roger Penrose in The Road to Reality.
2. ^ Originally Minkowski tried to make his formula look like Pythagoras's theorem by introducing the concept of imaginary time and writing −1 as i². But Wilson, Gilbert, Borel and others proposed that this was unnecessary and introduced real time with the assumption that, when comparing coordinate systems, the change of spatial displacements with displacements in time can be negative. This assumption is expressed in differential geometry using a metric tensor that has a negative coefficient.
3. ^ It should also be made clear that the length contraction result only applies to rods aligned in the direction of motion. At right angles to the direction of motion, there is no contraction.
1. ^ "On the Electrodynamics of Moving Bodies". ( web site): Translation from the German article: "Zur Elektrodynamik bewegter Körper", Annalen der Physik. 17:891-921. (June 30, 1905)
2. ^ Peter Gabriel Bergmann (1976). Introduction to the Theory of Relativity. Reprint of first edition of 1942 with a foreword by A. Einstein. Courier Dover Publications. pp. xi. ISBN 0-486-63282-2.
3. ^ "Définition du mètre". Résolution 1 de la 17e réunion de la CGPM (in French). Sèvres: Bureau International des Poids et Mesures. 1983. Retrieved 2008-10-03. Le mètre est la longueur du trajet parcouru dans le vide par la lumière pendant une durée de 1/299 792 458 de seconde. English translation: "Definition of the metre". Resolution 1 of the 17th meeting of the CGPM. Retrieved 2008-10-03.
6. ^ Walter, S. (1999). The non-Euclidean style of Minkowskian relativity. In The Symbolic Universe, J. Gray (ed.), Oxford University Press, 1999.
7. ^ Penrose, Roger (2004). The Road to Reality. Vintage. p. 406. ISBN 9780099440680.
8. ^ Einstein, Albert (2001). Relativity : the special and general theory. Authorised translation by Robert W. Lawson (Reprinted ed.). London: Routledge. p. 152. ISBN 978-0-415-25538-7. It appears therefore more natural to think of physical reality as a four dimensional existence, instead of, as hitherto, the evolution of a three dimensional existence.
9. ^ Feynman, Richard P. (1999). Six not-so-easy pieces : Einstein's relativity, symmetry and space-time. London: Penguin Books. p. xiv. ISBN 978-0-14-027667-1. The idea that the history of the universe should be viewed, physically, as a four-dimensional spacetime, rather than as a three dimensional space evolving with time is indeed fundamental to modern physics.
10. ^ Weyl, Hermann (1952) [1918]. Space, time, matter. (4th ed.). New York: Dover Books. ISBN 978-0-486-60267-7. : "The adequate mathematical formulation of Einstein's discovery was first given by Minkowski: to him we are indebted for the idea of four dimensional world-geometry, on which we based our argument from the outset."
11. ^ Thorne, Kip; Blandford, Roger. "Chapter 1: Physics in Euclidean Space and Flat Spacetime: Geometric Viewpoint" (PDF). Ph 136: Applications of classical physics. Caltech. Special relativity is the limit of general relativity in the complete absence of gravity; its arena is flat, 4-dimensional Minkowski spacetime.
13. ^ "Relativity in Chemistry". Retrieved 2009-04-05.
External links
Special relativity for a general audience (no math knowledge required)
Special relativity explained (using simple or more advanced math) |
5979726d4616ace1 | Cover Story
Unlike linear waves, solitons create their own channel as they travel in a uniform medium, remaining localized and preserving their shape. Whereas linear waves always pass through one another, solitons can be dramatically altered by...
Massive Soliton WDM Transmission at N × 10 Gbit/sec, Error-free Over Transoceanic Distances
We have demonstrated massive wavelength division multiplexing (WDM), over transoceanic distances, in multiples of 10 Gbit/sec. The vital ingredients in this success were, first, solitons; second, sliding-frequency guiding filters; and third, the use of "dispersion-tapered" fiber spans between amplifiers, i.e., spans for which D(z) tends to follow (here in step-wise approximation) the same exponential decay profile as the signal energy. Although the first two ingredients and their benefits are by now well known, the third, at least in this context, is both novel and vital.
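As a rough illustration of the dispersion-tapered idea, the sketch below builds a step-wise D(z) profile that tracks the exponential decay of signal power along a span. All parameter values here are assumptions for illustration, not the system parameters of the experiment.

```python
import numpy as np

alpha_db_per_km = 0.21                        # fiber loss, dB/km (assumed)
alpha = alpha_db_per_km * np.log(10) / 10.0   # convert to 1/km
span_km, n_segments = 33.0, 3                 # amplifier spacing and segments per span (assumed)
D0 = 0.5                                      # dispersion at span input, ps/(nm km) (assumed)

# Step-wise taper: each segment takes the exponential profile's value at its midpoint.
edges = np.linspace(0.0, span_km, n_segments + 1)
for z0, z1 in zip(edges[:-1], edges[1:]):
    zmid = 0.5 * (z0 + z1)
    print(f"segment {z0:4.1f}-{z1:4.1f} km: D = {D0 * np.exp(-alpha * zmid):.3f} ps/(nm km)")
```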
Bright Temporal Soliton-like Pulses in Self-defocusing Media
Bright temporal solitons are generating a great deal of interest because of their possible use in long-distance optical fiber communications. They maintain their temporal shape because the effects of dispersion and the Kerr nonlinearity balance each other. Until recently, the only demonstration of bright temporal solitons has been in silica optical fibers, which possess a self-focusing nonlinearity (n2 > 0) and anomalous dispersion (β2 < 0) for λ > 1312 nm. The nonlinear Schrödinger equation (NLS), which describes the propagation of an optical pulse through a nonlinear optical medium with chromatic dispersion, allows bright temporal solitons as long as the nonlinear refraction (n2) is of opposite sign to the dispersion (β2). Hence, a medium with normal dispersion and a self-defocusing nonlinearity should also support bright temporal solitons.
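The dispersion-nonlinearity balance in the NLS can be demonstrated numerically. The following split-step Fourier sketch (in normalized units, with arbitrary grid and step choices; an illustration, not the authors' method) propagates a fundamental bright soliton and checks that its shape is preserved. As the comments note, flipping the signs of both β2 and the Kerr coefficient, as in the self-defocusing case above, leaves the soliton solution unchanged.

```python
import numpy as np

# Normalized NLS: du/dz = -i (beta2/2) d^2u/dt^2 + i gamma |u|^2 u
N, T_window = 1024, 40.0
t = np.linspace(-T_window / 2, T_window / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])

beta2, gamma = -1.0, 1.0          # anomalous dispersion, self-focusing Kerr term;
                                  # flipping BOTH signs (normal dispersion plus
                                  # self-defocusing) supports the same bright soliton
u = 1.0 / np.cosh(t)              # fundamental bright soliton

dz, n_steps = 0.01, 500
half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))   # linear half-step propagator
for _ in range(n_steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))           # dispersion, half step
    u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)       # Kerr phase, full step
    u = np.fft.ifft(half_disp * np.fft.fft(u))           # dispersion, half step

print(np.max(np.abs(np.abs(u) - 1.0 / np.cosh(t))))      # small: shape preserved
```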
New Semiconductor Materials Offer Promise for Ultra-fast Optical Devices
Manakov Spatial Solitons
The Manakov soliton is a two-component soliton that was first considered by Manakov in the early 1970s. Based on the work of Zakharov and Shabat, Manakov found that the coupled nonlinear Schrödinger (CNSE) equations with a special choice of the coefficients in front of the nonlinear terms can be solved exactly. This system is integrable, and its solitons therefore have a number of special properties which might be useful in practice.
Novel Resonant Structures for Laser Light Modulation
We have recently developed and demonstrated, for the first time, novel resonant grating/waveguide structures that can modulate laser light at relatively high rates. We believe that these can be incorporated into arrays to form compact spatial light modulators operating at several hundred megahertz.
Optical Modulation with a Resonant Tunneling Diode
Optoelectronics Technology Consortium 32-Channel Parallel Fiber Optic Transmitter/Receiver Array Testbed
In the data communications industry, there are emerging requirements for a short-distance (tens to hundreds of meters), high-speed (200 Mbit/sec to 1 Gbit/sec) data bus for large computing environments, clustered parallel computing systems, and datacom switching. In response to these requirements, a parallel optical fiber interconnect has been developed by the Optoelectronics Technology Consortium (OETC), an ARPA-funded industry alliance including IBM, AT&T, Honeywell, and Lockheed Martin. This year, IBM completed testing of a 32-channel OETC fiber optic transmitter/receiver array in a product testbed, and announced future availability of a commercial product called "Jitney" based on the OETC prototype.
Opto-Electronic Microwave Oscillator
Photonic applications are important in RF communication systems to enhance many functions including remote transfer of antenna signals, carrier frequency up or down conversion, antenna beam steering, and signal filtering. Many of these functions require reference frequency oscillators. However, traditional microwave oscillators cannot meet all the requirements of photonic communication systems that need high frequency and low phase noise signal generation. Because photonic systems involve signals in both optical and electrical domains, an ideal signal source should be able to provide electrical and optical signals. In addition, it should be possible to synchronize or control the signal source by both electrical and optical means.
Simultaneous Laser Diode Emission and Detection for Optical Fiber Sensors
Reflective fiber optic intensity sensors often use a coupler to guide part of the reflected light back to a photodetector. We have demonstrated a sensor that requires no coupler, and instead uses the emitting laser diode for photodetection. A laser diode operating at constant current can detect light reflected into the junction region if the terminal voltage is monitored. We used a self-detecting source to sense rotation using a simple magneto-optic transducer and single-fiber sensor system.
Three Dimensional Reconstruction of Random Radiation Sources
Room Temperature, Mid-Infrared Quantum Cascade Lasers
The quantum cascade (QC) laser [1] is a new optical source in which one type of carrier, typically electrons, cascading down an electronic staircase...
Achievement of the Saturation Limit and Energy Extraction in a Discharge Pumped Table-Top Soft X-ray Amplifier
A major goal in ultrashort wavelength laser research is the development of compact "table-top" amplifiers capable of generating soft X-ray pulses of substantial energy that can impact applications. Such development motivates the demonstration of gain media, generated by compact devices, that can be successfully scaled in length to reach gain saturation. Under this condition, which occurs when the laser intensity reaches the saturation intensity, a large fraction of the energy stored in the laser's upper level can be extracted. To date, gain saturation had been achieved only in a few soft X-ray laser transitions, in plasmas generated by some of the world's largest laser facilities.
Multiple-wavelength Vertical Cavity Laser Arrays with Wide Wavelength Span and High Uniformity
Vertical-cavity surface-emitting lasers (VCSELs) are promising for numerous applications. In particular, due to their inherent single Fabry-Perot mode operation, VCSELs can be very useful for wavelength division multiplexing (WDM) systems, allowing high bandwidth and high functionality. Multiple-wavelength VCSEL arrays with wide channel spacings (~10 nm) provide an inexpensive solution for increasing the capacity of local area networks without using active wavelength controls.
New Techniques in Wideband Terahertz Spectroscopy
In recent years, remarkable progress has been made in the development of spectroscopic capabilities for coherent terahertz (THz) measurements. This spectral region is one of great interest because of the abundance of excitations in molecular systems and condensed media. It also represents a region in which the dielectric properties of materials are of critical importance for high frequency electronics and optoelectronics. A key ingredient to the significant advances in this field is the development of broadband, optically driven sources and detectors of terahertz radiation. The ready availability of laser pulses with durations of ~10 fsec suggests the potential for extending the bandwidth of coherent spectroscopy to significantly higher frequencies. By using materials with an instantaneous nonlinear optical response for both emission and detection, we may be able to capture much of this enormous bandwidth.
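A back-of-the-envelope estimate of the bandwidth implied by ~10 fsec pulses, using the time-bandwidth product of a transform-limited Gaussian pulse:

```python
tau = 10e-15              # drive pulse duration, s (the ~10 fsec quoted above)
dnu = 0.441 / tau         # time-bandwidth product of a transform-limited Gaussian
print(f"available bandwidth ~ {dnu / 1e12:.0f} THz")   # ~44 THz
```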
After Image
Interferometric Optical Tweezers
Optical Patterning of Three-Dimensional Spatio-Tensorial Micro-Structures in Polymers
One challenging requirement in the design of devices for photonic applications is to achieve complete manipulation of molecular order. The great latitude and flexibility of optical methods offer interesting prospects for material engineering using light-matter interactions. Efficient spatial modulation of a polymer's macroscopic properties is usually achieved using holographic recording of an interference pattern between intense light waves. For second-order optical nonlinear processes, full control of the molecular orientation is mandatory. However, patterning with polarized monochromatic beams results only in molecular alignment. We report on a new, purely optical technique based on a non-classical holographic process with coherent mixing of dual-frequency fields. It enables efficient and complete three-dimensional spatio-tensorial control of polymer micro-structures.
Spontaneous Density Grating Formation in Hot Atomic Vapor
Recently a new gain mechanism has been observed in a nonlinear optical system: the spontaneous formation of a density grating in an atomic vapor through interaction with a strong pump field. A sodium-filled cell is pumped by a high-intensity (I ≈ 10⁴ W/cm²) circularly polarized laser beam detuned from resonance and is probed by a weak field degenerate in frequency with the pump and with the same polarization. The probe beam is introduced into the cell in two different geometrical configurations: nearly parallel (angle ≈ 5°) and nearly antiparallel (same angle, but opposite direction) to the pump. For sufficiently high pump intensity, and for appropriate values of detuning and atomic density, the probe beam displays a gain as large as 30% (pumping only a small fraction of the probe cross section) at the expense of the pump, only in the nearly counterpropagating geometry.
Single-Atom Quantum Logic Gate and Schrödinger Cat State
One of the fundamental tenets of quantum mechanics is the existence of superposition states, or states whose properties simultaneously possess two or more distinct values. Although quantum superpositions and entanglements seldom appear outside of the microscopic quantum world, there is growing interest in the creation of "big" superpositions and massively entangled states for use in applications such as a quantum computer. We report first steps toward this goal by demonstrating a fundamental two-bit quantum logic gate and a "Schrödinger cat"-like state of motion with a single trapped ⁹Be⁺ ion. Both experiments allow sensitive measurements of decoherence mechanisms which will play an important role in the feasibility of quantum computation.
Polarization-entangled Photons and Quantum Dense Coding
Entangled states of particles form the cornerstone of the newly emerging field of quantum information: they are central to tests of nonlocality, have been proposed for use in quantum cryptography schemes, and would arise automatically in the operation of quantum computers. Polarization-entangled photons are preferable because they are easier to handle.
Excitation of a Schrödinger Cat State Within an Atom
Excess Quantum Noise Fluctuations in Unstable-resonator Lasers
Experiments completed during the past year confirm the existence of a sizable excess quantum noise factor in lasers using unstable optical resonators or, more generally, resonators with nonorthogonal oscillation modes.
Self-Trapping of Partially Spatially Incoherent Light Beams
Here, we report the first observation of self-trapping of a "partially" spatially incoherent optical beam in a nonlinear medium. Self-trapping occurs in both transverse dimensions, when diffraction is exactly balanced by photorefractive self-focusing. We have used the photorefractive nonlinearity associated with photorefractive solitons as the self-trapping mechanism and generated a stable, two-dimensional, 30-μm-wide, spatially incoherent self-trapped beam.
Supramolecular Enhancement of Second-Order Optical Nonlinearity
Only noncentrosymmetric molecules can possess a second-order nonlinear response, i.e., they have a nonvanishing first molecular hyperpolarizability. Polar molecules with donor and acceptor groups connected by a conjugated π-electron system are traditional organic second-order materials (Fig. 1). For macroscopic noncentrosymmetry, such molecules are poled in a host material using a static electric field. The nonlinear coefficients of poled materials are proportional to μβ where μ is the permanent dipole moment of the molecules and β is the vectorial part of the first hyperpolarizability.
Stopping Light in its Tracks
Isotropic Liquid Crystal Fiber Structures for Passive Optical Limiting of Short Laser Pulses
Ever since the invention of the laser, there has been a need to protect the eye or sensitive optical sensors from damage by overexposure. The problem has become increasingly difficult with the advent of frequency-agile, high-power pulsed lasers, which defeat fixed-line filters and optoelectronic/mechanical devices; all-optical or nonlinear optical means have to be used. In this context, various device concepts and nonlinear optical materials are being investigated. To satisfy such stringent requirements, it has become necessary to optimize both the device function and the material responses through specialized optical configurations. One means of achieving this is to use a fiber or waveguide geometry, in which highly intensity-dependent (optical limiting) processes occur more efficiently due to spatial confinement over distances much longer than the Rayleigh range of tightly focused lasers.
Texture in Binary Images
Image texture is one of the important parameters in the field of digital image processing. In displayed images, it affects the reproduction of the local average gray level, because usually there is a certain amount of pixel overlap. In image perception, it may result in the appearance of false contours between regions with different textures. There is a demand for a quantitative description of textural characteristics in the various fields of digital image processing, of which digital halftoning is one.
Photonic Signal Processing for Biomedical and Industrial Ultrasonic Probes
Ultrasonics has been widely used in medical, industrial, and scientific applications. In medical applications, ultrasonics is an essential diagnostic method in internal medicine, urology, and vascular surgery. High-Intensity Focussed Ultrasound (HIFU) and lithotripsy applications use relatively low ultrasonic frequencies (< 100 kHz), while a 5-15 MHz band is typically used in diagnostic external-cavity imaging ultrasound. Today, with endoscopic applications in mind, a very high ultrasonic frequency, e.g., 100 MHz, probe with high (> 50%) instantaneous bandwidth is highly desirable, as higher frequencies give higher imaging resolution and smaller physical dimensions for the front-end intracavity transducer array.
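The resolution claim follows from the acoustic wavelength scaling inversely with frequency. A quick sketch, assuming a typical soft-tissue sound speed of 1540 m/s:

```python
c_tissue = 1540.0         # speed of sound in soft tissue, m/s (assumed typical value)
for f_hz in (5e6, 15e6, 100e6):
    wavelength_um = c_tissue / f_hz * 1e6
    print(f"{f_hz/1e6:5.0f} MHz -> acoustic wavelength {wavelength_um:6.1f} um")
# 100 MHz gives a ~15 um wavelength, hence the much finer imaging resolution.
```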
Atomic Lifetimes From Molecular Spectroscopy
Although molecular properties are clearly related to the properties of the constituent atoms, it has seldom been possible to make precision measurements of these atomic properties by examining the molecules. Over the last year or so, however, molecular spectroscopy has been shown to be a powerful technique for determining atomic lifetimes and has provided the most precise alkali lifetimes yet reported, at levels ranging from 0.3% to 0.03%.
Multiphoton Ionization with Precise Intensity Control
In the presence of strong laser fields (> 10¹² W/cm²), atoms and molecules can simultaneously absorb many photons to exceed the ionization limit, leading to the ejection of photoelectrons. The analysis of photoelectron kinetic energy spectra provides valuable insight into atomic and molecular structures. The kinetic energy can be determined by measuring the time-of-flight of the electrons over a known distance.
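A minimal sketch of the time-of-flight conversion just described; the drift length and flight time below are made-up illustrative values, not the experiment's parameters:

```python
m_e = 9.109e-31                   # electron mass, kg
eV = 1.602e-19                    # joules per electronvolt
L_drift, t_flight = 0.5, 250e-9   # drift length (m) and measured time (s), assumed

v = L_drift / t_flight            # electron speed from time-of-flight
KE = 0.5 * m_e * v**2             # nonrelativistic kinetic energy
print(f"KE = {KE / eV:.1f} eV")   # ~11.4 eV for these illustrative numbers
```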
Atomic Streak Camera Sees Rydberg Atoms Falling Apart
Highly excited or Rydberg atoms are an ideal quantum laboratory. In a Rydberg atom, the loosely bound electron moves in a large Kepler orbit around the atomic nucleus and is very sensitive to external perturbations. For instance, applying a moderate electric field drastically influences the behavior of the quantum system. A static field of a few kilovolts per centimeter is sufficient to change the bound Rydberg atom into a system from which the electron can escape. Within a few picoseconds (10⁻¹² sec) the atom falls apart. It is an experimental challenge to detect how this decay actually happens. Does the electron come out immediately, or does the atom emit the electron in subsequent bursts of probability that are signatures of the quantum nature of the system?
Bragg Scattering from an Optical Lattice
Two-dimensional Photonic Bandgap Structures at 850 nm
Nanoparticle-Enhanced Photodetection
Intracavity Phase Modulated Transmitter for Hybrid Lidar-Radar
This paper discusses the development of a microwave-modulated transmitter using a bulk phase modulator for a novel hybrid lidar-radar application. Aerial lidar (light detection and ranging) is used for underwater surveillance. A pulse of blue-green optical radiation is transmitted from an airborne platform, and target information is extracted from the detected echo. Attenuation, dispersion, backscatter clutter, and particularly the lack of coherent signal processing, limit the performance of lidar.
An Intuitive User Interface for Remote Adjustment of Optical Elements
Nonlinear Optics Using Atomic Coherence Effects
Nonlinear optical mixing of existing laser frequencies to access portions of the spectrum where lasing action is not easily obtainable is common practice today. Various techniques, including important new ones like quasi-phasematching in tailored nonlinear media, are aimed at efficient generation in the region of the spectrum from just under 200 nm in the ultraviolet to a few microns in the infrared.
1997 Funding for R&D Up: Poised to Plummet toward 2002
The results are in for 1997 federal appropriations for science. According to AAAS, Congress appropriated $74 billion for R&D—an increase of 4.0% from last year. About $14.8 billion of the total goes for basic research, an increase of 2.7%. R&D funding kept ahead of inflation, but the slope downward will have to get steeper to balance the budget by 2002.
New Terahertz Beam Imaging Device
|
dbe3b10d8534e3a6 |
Chapter 32 | Medical Applications of Nuclear Physics
Chapter 32 Homework
Conceptual Questions
32.1 Medical Imaging and Diagnostics
1. In terms of radiation dose, what is the major difference between medical diagnostic uses of radiation and medical therapeutic uses?
2. One of the methods used to limit radiation dose to the patient in medical imaging is to employ isotopes with short half-lives. How would this limit the dose?
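(A small numerical sketch of the idea behind question 2, with illustrative half-lives: for a fixed initial amount of tracer, a short half-life packs essentially all the decays, and hence the dose, into a short window.)

```python
import numpy as np

for t_half_hours in (6.0, 8.0 * 24, 30.0 * 24):   # e.g. Tc-99m, I-131, a month-scale isotope
    lam = np.log(2) / t_half_hours                 # decay constant, 1/h
    frac_first_day = 1.0 - np.exp(-lam * 24.0)     # fraction of ALL decays within 24 h
    print(f"t_half = {t_half_hours:6.1f} h: {100 * frac_first_day:5.1f}% of the dose in day 1")
```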
32.2 Biological Effects of Ionizing Radiation
3. Isotopes that emit α radiation are relatively safe outside the body and exceptionally hazardous inside. Yet those that emit γ radiation are hazardous outside and inside. Explain why.
4. Why is radon more closely associated with inducing lung cancer than other types of cancer?
5. The RBE for low-energy βs is 1.7, whereas that for higher-energy βs is only 1. Explain why, considering how the range of radiation depends on its energy.
6. Which methods of radiation protection were used in the device shown in the first photo in Figure 32.35? Which were used in the situation shown in the second photo?
Figure 32.35 (a) This x-ray fluorescence machine is one of the thousands used in shoe stores to produce images of feet as a check on the fit of shoes. They are unshielded and remain on as long as the feet are in them, producing doses much greater than medical images. Children were fascinated with them. These machines were used in shoe stores until laws preventing such unwarranted radiation exposure were enacted in the 1950s. (credit: Andrew Kuchling) (b) Now that we know the effects of exposure to radioactive material, safety is a priority. (credit: U.S. Navy)
7. What radioisotope could be a problem in homes built of cinder blocks made from uranium mine tailings? (This is true of homes and schools in certain regions near uranium mines.)
8. Are some types of cancer more sensitive to radiation than others? If so, what makes them more sensitive?
9. Suppose a person swallows some radioactive material by accident. What information is needed to be able to assess possible |
e0e23525349c4acc | Growing ENTJ/INTJ/ENFJ Populations
Alright, alright. I will admit that this probably wouldn't work, completely. But, I do think that the world would be a better place if there were more ENTJ, ENFJ, INTJ types. So, the bottom line is still the same, "Go forth and make more of us."
He invented the A-bomb... I personally think the world would have been nicer without nuclear weapons :laughing:
I do think that all the types contribute to the world - in good or bad ways. But I prefer to be around N-types :slight_smile:
Isn't J. Robert Oppenheimer the father of the A-bomb?
Here are some of Einstein's contributions to the world:
* The special theory of relativity, which reconciled mechanics with electromagnetism
* The general theory of relativity, a new theory of gravitation obeying the equivalence principle.
* Founding of relativistic cosmology with a cosmological constant
* The first post-Newtonian expansion, explaining the perihelion advance of Mercury
* Prediction of the deflection of light by gravity and gravitational lensing
* The theory of density fluctuations in gases and liquids, giving a criterion for critical opalescence
* The photon theory and wave-particle duality derived from the thermodynamic properties of light
* The quantum theory of atomic motion in solids
* Zero-point energy
* The semiclassical version of the Schrödinger equation
* Relations for atomic transition probabilities which predicted stimulated emission
* The EPR paradox
* A program for a unified field theory
* The geometrization of fundamental physics.
Ace, you're right about that I think; in fact Wikipedia doesn't even mention Einstein in the article History of nuclear weapons :nerd:
I'm glad you cleared his name - I've always considered Einstein responsible for nuclear weapons, which is obviously really unfair :naughty:
And that is a good example of one ENTJ-issue; jumping to conclusions and being hard on others. Good thing we're not alone :mrgreen:
Your theory that the other types want to be ENTJs because they admire certain qualities about them is flawed. All the types receive admiration at some point, and I see no evidence that ENTJs receive any more praise than any of the other types. Also, I can just as easily find a post where someone is praising ISFPs and by your logic this means that all the people on that forum want to be that type.
Here's a thread praising ENFPs: ... php?t=1729
Does that now mean that all the people on that forum want to be ENFPs?
Just because I respect, admire or even envy someone, doesn't mean I want to be them. I admire, respect and envy Chris Cornell's singing voice, but that doesn't mean I want to be Chris Cornell, nor does it mean I want to sing exactly like Chris Cornell; I place too much value on individuality for that. It just means that I would like to be able to sing as well as him. The same goes for ENTJs: there are traits about them that I admire and that I'd like to be able to do as well as them, but that doesn't mean I want to be one of them. Even in the example you posted of an INTJ saying he admired ENTJs, the INTJ in question also mentioned how INTJs are better than ENTJs when it comes to self-understanding and self-awareness. Why you would use this as an example of a different personality type expressing an inherent inferiority to ENTJs is beyond me. Did you simply filter that part out because it didn't fit in with your belief that everybody wants to be an ENTJ?
And also, what does your theory make of ENTJs that profess an admiration for the qualities of other types? What if an ENTJ on this board said that they love the creativity of ISFPs, for example? What would you make of that post and why (if at all) would you treat it any differently from an ISFP professing admiration for the self-motivational abilities of an ENTJ for example?
Oh, and your theory that ENTJs are the most "admired" type is just plain wrong anyway. There are usually threads on all the forums about "which type would you like to be if you couldn't be your own type" and "which types do you like the most". On the NF forums in particular, ENTJs don't come anywhere near the top choices. ... hp?t=10083
Regarding your arguments for the genetic side of personality, there's no sure proof either way but I think it likely that it works in the same (or at least a similar) way as other "mental" qualities. That is, that you will have inherited tendencies (like with mental illness, for example). Whilst your genes will give you an increased tendency to develop into a certain type, ultimately it will be your environment that determines whether or not you will turn out that way. That's why there are so many ENTJs in your family, inherited potential was combined with an environment that encouraged ENTJ development. If you'd been adopted out at a young enough age to say, an ENFP family that placed a high emphasis on ENFP qualities, you would have been less likely to develop into an ENTJ, despite your inherited genetic tendency. It still would have been possible, just less likely.
Therefore, as far as your plan goes, it wouldn't be enough to simply pair up ENTJs together, you'd have to control the environment to an incredible level of detail to avoid what we might call "pollution". Also, random mutations and latent recessive genes could still lead to other types turning up regardless of your efforts. What would you do with these random examples, should they occur? If you prevent them from breeding, you will have created an intolerant, repressive and unjust society, which is what you were trying to avoid in the first place.
And anyway, even if you could make the whole world just those three types you want, it would not guarantee peace, because all types have their bad eggs, including those three. For example, Stalin was quite possibly an ENTJ, and Charles Manson is quite possibly an ENFJ (etc. etc.).
As for your claim that ENTJs are the rarest type. That simply is not supported by the evidence. ... evelopment
As you can see from these sites (and others), ENTJs on average come in at roughly the third or fourth rarest. The rarest type tends to be INFJ.
Moving on, as others have already said Einstein didn't help create the atom bomb, he came up with the theory that was used as the basis for the bomb, but he had no control over how that theory was used, and that same theory has been used in many other more positive ways that have helped to make the world a much better and more informed place to live in. All the other types can produce examples of people that have contributed positively to the world and even if those individuals did things that could be described as negative contributions, they were only human and nobody who has ever lived has been perfect. This applies just as much to even the greatest and most wonderful ENTJs, INTJs and ENFJs who have ever lived too.
In your opinion, you haven't actually provided any real evidence to back that assertion up whatsoever.
As others have mentioned, INTJs love to be independent. There is absolutely no guarantee that they would follow the orders the ENTJs gave them. Also, ENFJs love to lead too, there is likewise absolutely no guarantee that they would follow the ENTJs orders either. In fact, I think it likely that the majority of ENFJs and ENTJs would end up fighting for dominance and the INTJs would attempt to secede from both of them altogether, thereby introducing war and conflict into your idealised world.
Plus, N types generally tend to dislike physical labour. You would therefore basically have to force people (through things like ballots perhaps, if not just brute force) to do things like construction work, for example. Unless you set up some sort of rota system they would inevitably begin to resent this, and they may well come to resent it anyway (they may be interrupted whilst working on a theory for example). Additionally, S types generally are capable of having more profound insights into improvements that can be made to physical systems than N types are, so progress in these areas would be slowed, plus they can also give new and unusual insight into areas that are more traditionally N territory, so progress would be slowed here too. And that's just the N/S divide, the same applies to the other letters too.
No-one uses just one judgement function exclusively, and everyone can learn to use the others more effectively. They will always have a preference for their main type, but that doesn't preclude them from making decision using the other functions. For example, I prefer to use Fi to make decisions, but that doesn't stop me from using Fe, Ti or even Te if I think the situation calls for it. The more mature and developed an individual is, the more comfortable he or she is with this fact and the more he or she does it.
That's fair enough, but do you then give money to charities that help homeless people out of their predicaments? If not your statement is a touch hypocritical isn't it? "I'm not going to give you money because you wont use it to improve yourself, but I wont give money to a group that can help you to improve yourself either. So I'm not actually doing anything to help you whilst still judging you for not doing anything to help yourself, despite the fact that you lack the means to do so".
I will try to keep this simple.
I already defined that the original plan is flawed.
I appreciate the sites you used as evidence to show that ENTJs are not the rarest type. The site I used to make my statement that ENTJs are the rarest type had us at 1.2% and INFJ at 1.3%. I can't find it, you win until I do. I highly doubt that I was mixed up; more probably, the site was updated. Regardless, your examples outweigh my example.
I was harsh when I said that all of the other types are "ruining" the world. A better way of saying what I mean more clearly is that other types are causing more problems than ENTJs. We get things done, beneficial things.
I have surveyed plenty of sites and ENTJs get the most admiration.
I'm not Hitler. I value free will. I believe in peace by peace, and violence only in self defense.
Einstein theorized about bringing the atom bomb into existence, he is not innocent. (Emphasis on the period that ends that sentence) Einstein was a terrible example to use as someone who didn't ruin the world. A better example would have been Gandhi. Also, Einstein was a terrible father.
We don't "need" all of the types to make the world go 'round. If a type was gone tomorrow, existence would move on. World leaders should all be altruistic and rational. We don't need to wipe any type out, but we should increase the amount of rational, altruistic people to make the chances of our world leaders being of an altruistic, or at least rational type increase.
Choosing exactly what tool(s) you will use to make a decision is part of the process of making a decision.
Charities: Yes, I suggest
In 1999 Time magazine named him the Person of the Century, beating contenders like Mahatma Gandhi and Franklin Roosevelt, and in the words of a biographer, "to the scientifically literate and the public at large, Einstein is synonymous with genius."
Einstein: "My part in producing the atomic bomb consisted in a single act: I signed a letter to President Roosevelt, pressing the need for experiments on a larger scale in order to explore the possibilities for the production of an atomic bomb.
I was fully aware of the terrible danger to mankind in case this attempt succeeded. But the likelihood that the Germans were working on the same problem with a chance of succeeding forced me to this step. I could do nothing else although I have always been a convinced pacifist. To my mind, to kill in war is not a whit better than to commit ordinary murder."
From: ... Quotes.htm
This thread is an embarrassment to mankind. I suggest you delete it.
On what are you basing this? On the surface this statement sounds like nothing more than your own subjective opinion, but if you have evidence that shows otherwise I'd be glad to see it.
Yes, some of you are remarkable people, truly shining examples of the positive potential of humanity. Some of you however are c*nts, people who make the world a worse place to live every time they breathe. Most of you are somewhere in between.
In this, you are just like every other type.
We appear to have a difference of opinion, one that I think stands little chance of being resolved between us. I suggest we leave it to others to decide for themselves which of our respective assessments (if either) they agree with.
Need? No. Benefit from? Yes.
It appears that you are under the impression that only the types you mentioned are capable of altruistic, rational thought. I see absolutely no evidence of this whatsoever, nor do I see any evidence that the types you mentioned are any more altruistic than any other type. Additionally, I see no evidence that the types you mentioned are inherently more "rational" than the other types.
While it can be argued that NT types are generally more logically minded than other T types and the F types, this does not raise *NTJs to a higher level of "rationality" than the other NTs. Plus, Feeling types are just as capable of reason as any Thinking type, it's just that their reasoning places a higher emphasis on people's emotional well-being than the reasoning of T types does. (There are, of course, exceptions to this rule. Some Feelers can be entirely unreasonable, as can some Thinkers. Plus, even average Thinkers can place just as high a level of value on people's emotional well-being as Feelers do, as long as that makes logical sense to them. Equally, even average Feelers can place just as high a level of value on objectivity as Thinkers do, as long as they think that that is what would be best for people's emotional well-being).
All the types are subject to subjective decision making, and that includes *NTJs, since they rely on Ni as their primary perceiving function, a function that by its nature makes leaps of perception without having all the facts, something which is inherently irrational.
Well, whether I agree with him or not, whether I think his arguments and opinions are well thought out or not, he has the right to air them and defend them. As long as the discussion remains open and drama-free (which so far it has) I for one don't have any problem with this thread's continued existence.
I'm not deleting the thread, and even I disagree with the original idea.
Correct, it was based on opinion. The opinion was based on the ability of types to make rational decisions and the ability to get things done. We do need worker bees, but eventually they won't be necessary due to improved technology.
Benefit? Perhaps. Benefit most? Perhaps not.
Einstein still played a part in creating the atomic bomb. Yes, he was a genius (and a terrible father), but that doesn't get him off the hook.
Agree, we will not resolve our issue. Regardless, it's not worth talking about anymore and a resolution by facts won't help either of us in any way.
I still think the world needs more rationals.
If you look at the famous people of each type you will see the trend that the types I have described are altruistic. FDR and Nixon, both ENTJs, tried to improve the lives of their fellow man by altruistic means. By today's standards they would both be borderline socialists. Nixon started welfare and wanted the minimum allowance to be much, much higher, and also wanted universal healthcare, which was only passed partially, for children, after many negotiations as CHIP. Look up the other types yourselves. Remember you are looking for trends, not just individuals. ... ruism.html
Hmm, I'm really not a fan of Perseus' theories, they seem to be based on a lot of ideas I just can't get behind. For example, he mentioned over on the ENTP forum that he thinks Ti is the primary cognitive function of all NTs, personally I think you'd have an extremely hard time finding a statistically significant amount of people who are familiar with the MBTI who would agree with that.
(Oh and FWIW, regardless of his politics Nixon's never struck me as an ENTJ, I'd have said ESTP).
I think a world with more ENTJ, INTJ or ENFJ would be a good thing for most Ns. Most Ns would be able to get more friends and partners.
I don’t think the ENTJ, INTJ or ENFJ types would like it that much, though: they would find more competition in their niche, and so they would be relegated to the niches of other types, but they wouldn’t be as good as those other types, so they would become redundant.
To make this happen I think genetically engineering babies would be easier than what you suggested.
I think it would be a good thing for the world to have more of these types, not just N's. I would like to find an MBTI density map. I believe that this map could potentially illustrate what I believe to be true... or it could prove me wrong. Let me know if any of any of you ever come across one!
First of all, it is not hereditary. If anything, when you try to dominate or decide what your child's temperament is, chances are they will rebel and grow into a type vastly different from the one you wanted. They may become an SP, for instance.
I know this because my mom is an ESTP while my dad is an ISTJ, with whom I share nothing in common and whom I hate talking to.
Secondly, to say that the other 13 types provide no value is vastly short sighted. How boring and lacking variety would the world be? Instead of antagonizing others, you should understand their motivation and values, only then can we synergize and do things more efficiently.
Personality is both hereditary and learned/developed through social interaction. I would like to put more emphasis on the hereditary side, as is accepted by the evidence provided in previous posts. Do you have any evidence to back up any of what you state to be true? (I'm not trying to be combative here, just trying to sort out the opinion from science).
The main idea behind this post is: The more smart, reasonable, and altruistic people the Earth has (rationals mostly), the better off it is.
Regarding your personal genetic situation, I believe a lot of ENTJs are actually ESTPs. Read some ESTP descriptions and see what you think (I wouldn't mind hearing what you think either)...
Do YOU have the evidence?
I know for a fact that in the nature AND nurture debate, it is accepted that our genetics give us the flexibility to be all types; however, it is the environment that leads us to favour one over the other. Only certain traits of personality, such as shyness, are inherited genetically, and even then, they can be overcome by working on them. Note that I said certain traits, and not a temperament or personality at large. The only difference is that it would be easier for some than others to overcome these traits, depending on which sensitive periods in your life you are exposed to the environmental stimuli that make you the way you are.
On the ESTP comment, you may be right about some people. There are definitely some similarities. However, I do see some discrepancies too. For example, I excelled in school and always liked new and interesting theories. Even though I do love fast pace, unpredictability and an ever-changing experience, the way I approach things is mostly methodical and planned.
I do have evidence, which was posted earlier in this topic. The genetics side of the debate is overwhelmingly favored; nature always wins. You are and always will be the same type. Anyone can become funny or mean or cynical or social or antisocial... but deep down, your type is what you naturally prefer.
I really found this thread fascinating regardless of how much dispute there was.
I think by now we can say all types have their goods and bads, and we know that Iron Mickie sees both sides and would just like to see more of certain qualities in this world.
I believe in both sides of the nature/nurture argument. I think trying to change society by spreading our genes more would be a very large waste of time though. I think nurture plays a bigger part, but I still believe genes shouldn’t be forgotten.
Personally, I think that N is what needs to be worked on most. I think that we need to promote seeing the big picture and planning long term. I think it would create an effect similar to what IM was looking for, and it would be far more reasonable to plan out through schooling and other factors.
I appreciate your understanding of this argument/thread. I agree that N is the big fish. Long term planning/solutions are what will keep people safe, happy, creative, and productive. Everything can be changed with education, for better or worse. Every generation is a fresh start…
I am still sticking to my guns about XNTJs and ENFJ’s being the best for society, and that there should and can be more of them.
I think that the best society possible would have a majority of XNTJs and a lesser majority of ENFJs (that is, with all of the other types included in smaller percentages). Any way you slice it, decisions need to be made rationally, without idealistic influence. What works works, and what doesn't doesn't; things change over time; don't cling to what YOU want… the greater good is what needs to be looked after.
Born or learned I don’t know, but as I studied this topic I found this:
If NTs are 1.5% of the population: out of 6 children, my parents had 3 NTs and at least one NF, the others undetermined.
My offspring are ENTJ INTJ ENTP INFJ
I am an ENTJ
All my children were homeschooled, with a lot of logical discussion in school and after supper.
My wife is an …ISFP!!!
She definitely knows what it feels like to not fit in.
But as her life has progressed she has developed the N, T and J sides of herself, as well as the E.
But still when she tests, she is an ISFP as her preference
Genetic or environment? Interesting question.
How did she produce all N children?
How come none of her offspring were S’s at all if it is all genetic?
Research needs to be done.
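As one quick piece of research, the arithmetic can be run directly. Taking the 1.5% NT figure quoted above at face value and assuming type were independent of parentage, the odds of 3 or more NT children out of 6 come out tiny, which is exactly the puzzle:

```python
from math import comb

p, n = 0.015, 6    # NT base rate quoted above, number of children
p_3_or_more = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))
print(f"P(3 or more NT children out of 6) = {p_3_or_more:.2e}")   # ~6.5e-05
```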
My own observation of this subject is that most of the information is by observation and is very subjective.
Roger b. |
a86e985f357d3996 |
In information science, noise is generally the enemy of information. But some noise is the friend of freedom, since it is the source of novelty, of creativity and invention, and of variation in the biological gene pool. Too much noise is simply entropic and destructive. With the right level of noise, the cosmic creation process is not overcome by the chaos.
When information is stored in any structure, from galaxies to minds, two fundamental physical processes occur. First is a collapse of a quantum mechanical wave function. Second is a local decrease in the entropy corresponding to the increase in information. Entropy greater than that must be transferred away to satisfy the second law of thermodynamics.
If wave functions did not collapse, their evolution over time would be completely deterministic and information-preserving. Nothing new would emerge that was not implicitly present in the earlier states of the universe.
It is ironic that noise, in the form of quantum mechanical wave function collapses, should be the ultimate source of new information (low or negative entropy), the very opposite of noise (positive entropy).
Because quantum-level processes introduce noise, stored information may contain errors. When information is retrieved, it is again susceptible to noise, which may garble the information content.
Despite the continuous presence of noise around them and inside them, biological systems have maintained and increased their invariant information content over billions of generations. Humans increase our knowledge of the external world, despite logical, mathematical, and physical uncertainty. Biological and intellectual information handling balance random and orderly processes by means of sophisticated error detection and correction schemes. The scheme we use to correct human knowledge is science, a combination of freely invented theories and adequately determined experiments.
In Biology
Molecular biologists have assured neuroscientists for years that the molecular structures involved in neurons are too large to be affected significantly by quantum noise.
But neurobiologists know very well that there is noise in the nervous system, in the form of spontaneous firings of action potential spikes, thought to be the result of random chemical changes at the synapses. This may or may not be quantum noise amplified to the macroscopic level.
But there is no problem imagining a role for randomness in the brain in the form of quantum level noise that affects the communication of knowledge. Noise can introduce random errors into stored memories. Noise can create random associations of ideas during memory recall.
Molecular biologists know that while most biological structures are remarkably stable, and thus adequately determined, quantum effects drive the mutations that provide variation in the gene pool. So our question is how the typical structures of the brain have evolved to deal with microscopic, atomic level, noise - both thermal and quantal noise. Can they ignore it because they are adequately determined large objects, or might they have remained sensitive to the noise for some reason?
We can expect that if quantum noise, or even ordinary thermal noise, offered beneficial advantages, there would have been evolutionary pressure to take advantage of noise.
That our sensory organs have evolved to work at or near quantum limits is evidenced by the eye's ability to detect a single photon (a quantum of light energy) and the nose's ability to smell a single molecule.
Biology provides many examples of ergodic creative processes following a trial and error model. They harness chance as a possibility generator, followed by an adequately determined selection mechanism with implicit information-value criteria.
Darwinian evolution is the first and greatest example of a two-stage creative process, random variation followed by critical selection, but we will consider briefly two other such processes. Both are analogous to our two-stage Cogito model for the mind. One is at the heart of the immune system, the other provides quality control in protein/enzyme factories.
Noise in the Cogito model
The insoluble problem for previous two-stage models has been to explain how a random event in the brain can be timed and located - perfectly synchronized! - so as to be relevant to a specific decision. The answer is it cannot be, for the simple reason that quantum events are totally unpredictable.
The Cogito solution is not single random events, one per decision, but many random events in the brain as a result of ever-present noise, both quantum and thermal noise, that is inherent in any information storage and communication system.
The mind, like all biological systems, has evolved in the presence of constant noise and is able to ignore that noise, unless the noise provides a significant competitive advantage, which it clearly does as the basis for freedom and creativity.
The only reasonable model for an indeterministic contribution is ever-present noise throughout the neural circuitry. We call it the Micro Mind.
Quantum (and even some thermal) noise in the neurons is all we need to supply random unpredictable alternative possibilities.
And indeterminism is NOT involved in the de-liberating Will.
The major difference between Micro and Macro is how they process noise in the brain circuits. The first accepts it, the second suppresses it.
Our "adequately determined" Macro Mind can overcome the noise whenever it needs to make a determination on thought or action.
White Noise and Pink Noise.
Noise (specifically audio noise) is described as having a color when the amount of power (energy) in different frequencies is not uniform. By analogy with the amount of energy in different light frequencies (or wavelengths), when the energy is larger than average in longer wavelengths (the red part of the visual spectrum), then the noise is called "pink," although there is nothing visual.
Computer-generated noise may consist of random binary number sequences (1's and 0's). As long as the sequence is random (no statistical correlations or detectable patterns in the sequence), it is described as white noise.
The Wiener process is a mathematical construct based on white noise with a Gaussian probability distribution.
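These two definitions can be checked numerically. Here is a minimal Python/numpy sketch (the sample sizes and seed are arbitrary choices for illustration): white noise has no autocorrelation at nonzero lag, and summing sqrt(dt)-scaled white-noise increments yields a discrete Wiener process whose variance grows linearly with time.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# White noise: independent, identically distributed Gaussian samples.
noise = rng.normal(size=100_000)

# For white noise, the autocorrelation at any nonzero lag is ~ 0.
for lag in (1, 10, 100):
    c = np.corrcoef(noise[:-lag], noise[lag:])[0, 1]
    print(f"lag {lag:>3}: autocorrelation {c:+.4f}")

# A discrete Wiener process is the running sum of sqrt(dt)-scaled
# white-noise increments; its variance grows linearly with time,
# so Var[W(T)] should come out close to T = n * dt.
n, dt = 1000, 0.01
paths = np.cumsum(np.sqrt(dt) * rng.normal(size=(2000, n)), axis=1)
print("empirical Var[W(T)] =", paths[:, -1].var(), " (T =", n * dt, ")")
```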
Many naturally occurring processes exhibit white noise, including the Brownian motion of tiny particles suspended in a liquid. The atmosphere is also treated as a source of random white noise by some online random-number services, which use radio antennae tuned between radio stations to generate random digit patterns from "atmospheric" white noise.
Whether this noise is genuinely random in the sense of irreducible quantum randomness is a question of the relationship between thermal noise and quantal noise.
Ultimately, this relationship depends on whether a classical gas is entirely deterministic (cf., deterministic chaos), and whether binary collisions of gas particles can be treated deterministically or must be treated quantum mechanically. If they are deterministic, then collisions are in principle time reversible.
In quantum mechanics, microscopic time reversibility is taken to mean that the deterministic linear Schrödinger equation is time reversible.
A careful quantum analysis shows that ideal reversibility fails even in the simplest conditions - the case of two particles in collision.
When they collide, even structureless particles should not be treated as individual particles with single-particle wave functions, but as a single system with a two-particle wave function, because they are now entangled.
Treating two atoms as a temporary molecule means we must use molecular, rather than atomic, wave functions. The quantum description of the molecule now transforms the six independent degrees of freedom into three for the molecule's center of mass and three more that describe vibrational and rotational quantum states.
The possibility of quantum transitions between closely spaced vibrational and rotational energy levels in the "quasi-molecule" introduces uncertainty, which could be different for the hypothetical perfectly reversed path.
Stochastic Noise.
In probability theory, stochastic processes are random (indeterministic) processes that are contrasted with deterministic processes.
Robert Kane on Noise
In his latest attempts to locate where and when indeterminism contributes to free will, Kane suggests that it enters as noise. But the noise does not contribute randomness to generating alternative possibilities, as in our Cogito two-stage model. Instead, noise just interferes with decisions and makes them more difficult!
"As it happens, on my libertarian account of free will, one does not need large-scale indeterminism in the brain, in the form, say, of macro-level wave function collapses (in the manner of the Penrose/Hameroff view mentioned by Vargas). Minute indeterminacies in the timings of firings of indeterminism neurons would suffice, because the indeterminism in my view plays only an interfering role, in the form of background noise. Indeterminism does not have to "do the deed" on its own, so to speak. One does not need a downpour of indeterminism in the brain, or a thunderclap, to get free will. Just a sprinkle will do."
(Four Views on Free Will, Fischer et al., p. 183)
3:02 AM
Yo anyone around here good at classical Hamiltonian physics?
You can kinda express Hamilton's equations of motion like this:
$$ \frac{d}{dt} \left( \begin{array}{c} x \\ p \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) \nabla H(x,p) \, .$$
Is there a decent way to understand coordinate transformations in this representation?
(By the way, the incredible similarity between that equation and Schrödinger's equation is pretty cool. That matrix there behaves a lot like the complex unit $i$ in that it is a rotation by 90 degrees and has eigenvalues $\pm i$.)
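That remark can be checked numerically. A minimal Python/numpy sketch (the harmonic-oscillator Hamiltonian is just an illustrative choice, not anything from the discussion):

```python
import numpy as np

# The 2x2 matrix from the equation above.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.allclose(J @ J, -np.eye(2)))  # True: J^2 = -I, just like i^2 = -1
print(np.linalg.eigvals(J))            # eigenvalues +1j and -1j

# For H = (x^2 + p^2)/2 we have grad H = (x, p), so d/dt (x, p) = J (x, p):
# the Hamiltonian flow is a rigid rotation of phase space.
z = np.array([1.0, 0.0])               # initial (x, p)
dt = 1e-3
for _ in range(int(round(2 * np.pi / dt))):  # integrate one full period
    z = z + dt * (J @ z)               # plain Euler step, illustration only
print(z)                               # back near (1, 0), up to O(dt) drift
```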
3:41 AM
schroedingers eqn has strong parallel(s) to the wave eqn of classical physics (part of its inception) but it seems this connection is rarely pointed out/ emphasized/ seriously investigated anywhere...
2 hours later…
5:54 AM
Q: Connection between Schrödinger equation and heat equation
Kevin KwokIf we do the wick rotation such that τ = it, then Schrödinger equation, say of a free particle, does have the same form of heat equation. However, it is clear that it admits the wave solution so it is sensible to call it a wave equation. Whether we should treat it as a wave equation or a heat e...
3 hours later…
8:26 AM
@JohnRennie physics.stackexchange.com/questions/468349/… What Georgio said. Can't you add additional dupe targets? Gold badgers on SO can, but maybe they haven't extended that functionality to Physics.SE.
@PM2Ring fixed! :-)
That was quick! Thanks.
9:04 AM
This question prompted me to do a price check on osmium. I was surprised that it's relatively cheap compared to the rest of the platinum group, but that seems to be because it has a small & steady supply & demand, so it's not attractive to traders in precious metals.
I guess the toxicity of its oxide (& other tetravalent compounds) is also a disincentive. ;) Wikipedia says it sells for around $1000 USD per troy ounce, but other sites quote a starting price of $400.
These guys sell nice looking ingots of just about every metal you can think of, apart from the alkalis & radioactives. I think I'll pass on the osmium & iridium, but I suppose I could afford a 1 oz tungsten ingot. :) Sure, it's not quite as dense as osmium / iridium, but its density is still pretty impressive.
1 hour later…
10:25 AM
@DanielSank the term you want to look for is "complexification" of a symplectic manifold
also this:
In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, −I. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as a complex vector space. Every complex vector space can be equipped with a compatible complex structure, however, there is in general no canonical such structure. Complex structures have applications in representation theory as well as in complex geometry where they play an essential role in the definition of almost complex manifolds, by contrast to complex...
I thought Arnol'd had a more in-depth discussion, but there's only a brief mention in §41.E
1 hour later…
11:44 AM
I just don't get why someone would think that proving a mathematical thing like Pythagoras' theorem is something that physics can do. physics.stackexchange.com/questions/468504/… I guess it'd be reasonable in Euclid's day, or even Newton's, but certainly not since the development of non-Euclidean geometries.
2 hours later…
1:33 PM
Hi. Am I wrong about this solution?
this question is from tensor algebra
Hi, @Leyla. I hope you don't think my previous reply is rude, but it's much better if you write equations in MathJax. My old eyes can barely read the equations in that photo, especially on my phone. And MathJax is a lot easier to search than equations in images.
@PM2Ring Ohh sure, if it's the case then I can type them in mathjax
There are bookmarklets & extensions that can be used to render MathJax in chatrooms. Stack Exchange won't build it into chatrooms because they don't want to impose the overhead on chat users...
@LeylaAlkan in a word, yes
your formulation is incorrect
unless there's crucial bits of context that you've omitted, the tensor cannot be assumed to be symmetric
indeed, if $t_{ijk}$ were totally symmetric, then $t_{[ijk]}$ would be identically zero and there would be no need to consider it
you're correct as far as $$t_{[321]} + t_{(321)} = \frac{2}{3!} \left[ t_{321} + t_{213} + t_{132} \right] $$ goes, but that's as far as you can take the calculation
this is enough to ensure that $t_{321} \neq t_{[321]} + t_{(321)} $ for an arbitrary rank-three tensor
particularly because it is perfectly possible for there to exist a rank-three tensor $t$ and a reference frame $R$ such that the components of $t$ on $R$ are such that $t_{321}=1$ and the rest of its components vanish.
2:08 PM
> Write out $t_{(321)}$ and $t_{[321]}$ .
Show that $t_{321}\neq t_{(321)}+t_{[321]}$
My solution:
$t_{(321)}=\frac 1 {3!}(t_{321}+t_{312}+t_{231}+t_{213}+t_{132}+t_{123})$
$t_{[321]}=\frac 1 {3!}(t_{321}-t_{312}-t_{231}+t_{213}+t_{132}-t_{123})$
$t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{213}+t_{132})$
Since the $(3,0)$ tensor $t_{ijk}$ is totally symmetric, it's independent of the ordering of indices.
So,$t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{321}+t_{321})=t_{321}$
This is how I did it first.
For @PM2Ring
Oh okay great @EmilioPisanty
2:32 PM
@LeylaAlkan tensors are just vectors in a vector space. It's extremely important that you understand how these linear-independence and linearity arguments work, and that you get comfortable in producing them when they're needed.
i.e. the core take-home message you should be extracting from this is how the counter-example was generated and why it works.
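The counterexample can also be checked numerically. A minimal Python sketch (the helper functions are ad hoc illustrations, not from any library; indices are 0-based, so $t_{321}$ becomes `t[2, 1, 0]`):

```python
import numpy as np
from itertools import permutations

def sym_part(t, idx):
    """Total symmetrization t_(ijk): average over all index permutations."""
    return sum(t[tuple(idx[k] for k in p)] for p in permutations(range(3))) / 6.0

def antisym_part(t, idx):
    """Total antisymmetrization t_[ijk]: signed average over permutations."""
    def sign(p):
        # Parity of a permutation of (0, 1, 2), by counting inversions.
        s = 1
        for i in range(3):
            for j in range(i + 1, 3):
                if p[i] > p[j]:
                    s = -s
        return s
    return sum(sign(p) * t[tuple(idx[k] for k in p)]
               for p in permutations(range(3))) / 6.0

t = np.zeros((3, 3, 3))
t[2, 1, 0] = 1.0                 # t_321 = 1, every other component zero

idx = (2, 1, 0)
s, a = sym_part(t, idx), antisym_part(t, idx)
print(s + a, "vs", t[idx])       # 1/3 vs 1.0: not equal for a generic tensor
```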
1 hour later…
3:47 PM
@JohnRennie what do you mean by superposition?
4:08 PM
@Akash.B it's like position
only better
1 hour later…
5:33 PM
@DanielSank What exactly do you want to understand? Any canonical transformation is just going to leave that equation unchanged, right?
6:19 PM
Why are superstrings so hard
@EmilioPisanty Excellent
@ACuriousMind I suppose, but I'm trying to see it algebraically. In some sense I'm asking how to represent a canonical transformation in the notation used in my comment.
6:54 PM
@DanielSank in general?
it'll just be an arbitrary function
your notation won't be helped much
the cases where it gets interesting is if you want a linear transformation
in which case it's required to be symplectic
does that keyword get you closer to the core of your question?
Q: When is separating the total wavefunction into a space part and a spin part possible?
mithusengupta123The total wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s}$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$. In my notation, $s=1/2, m_s=\pm 1/2$. Questio...
in other news, this random thing has been on HNQ for most of the day
colour me perplexed.
I mean, not that I don't appreciate the rep-cap hit
but still
I do look forward to the gibbous-moon question getting flooded with outsiders, though =P
is the weak isospin (quantum number) the so-called flavor (quantum number)?
7:28 PM
or does flavor (quantum number) also involve weak hypercharge (quantum number)?
7:56 PM
I don't know if there is such a rule that only particles with nonzero flavor would undergo weak interaction. I read from [Wikipedia-Weak isospin](https://en.wikipedia.org/wiki/Weak_isospin) that "Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have $T = T_3 = 0$ and form singlets that do not undergo weak interactions." and "... all the electroweak bosons have weak hypercharge $Y_ w = 0$
, so unlike gluons and the color force, the electroweak bosons are unaffected by the force they mediate."
but $W^+$ has weak isospin 1 and $W^-$ has weak isospin -1, not zero, so they should participate in the weak interaction.
so I am confused as to what quantum number determines whether a particle participates in the weak interaction.
@EmilioPisanty I guess I'm asking how to transform the gradient.
Suppose I pick new variables that are related to the previous ones through a linear transformation. I know what to do on the left hand side, but on the right I have to do something to the gradient.
8:16 PM
When two harmonic waves going in opposite directions collide, do they completely cancel each other out?
1 hour later…
9:26 PM
can we really assume spinors are more fundamental than vectors?
if a manifold by chance doesn't admit spin structures, can we still assume spinors are more fundamental than vectors?
but if a manifold doesn't admit spin structures, how do you discuss fermions?
1 hour later…
10:47 PM
@DanielSank that's what the chain rule is for, right?
11:09 PM
@EmilioPisanty Yeah yeah fine I get the point. "Do the damned calculation yourself."
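The chain-rule calculation being pointed at, sketched for the linear case (an illustration under that assumption, not a quote from the discussion): with new coordinates $w$ and $z = (x, p)^T = Mw$ for a constant invertible $M$, set $\tilde H(w) = H(Mw)$. Then $\nabla_w \tilde H = M^T \nabla_z H$, so $\nabla_z H = M^{-T} \nabla_w \tilde H$ and
$$\dot w = M^{-1} \dot z = M^{-1} J \nabla_z H = M^{-1} J M^{-T} \nabla_w \tilde H \, .$$
The equations keep the form $\dot w = J \nabla_w \tilde H$ exactly when $M^{-1} J M^{-T} = J$, i.e. when $M J M^T = J$, which is the symplectic condition mentioned above.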
11:44 PM
@CaptainBohemian If a manifold does not admit spinors, you don't discuss spinors.
AOSIS OpenJournals, http://www.openjournals.net, info@openjournals.net
Can matter and spirit be mediated through language? Some insights from Johann Georg Hamann
Abstract: The Enlightenment introduced to European philosophy and thought-patterns the strict dichotomy between res extensa and res cogitans; that is, matter and spirit. How to overcome the dichotomy and conceive of the interactions between these planes of reality has since become an overarching issue for philosophers. The theory of evolution, as founded by Charles Darwin, understands human beings, with their ability to think, to have arisen in the evolutionary process. Neuroscience utilises insights from the theory of complex systems to attempt to understand how perception, thought and self-awareness can arise as a consequence of the complex system that is the brain. However, already at the height of the Enlightenment, a contemporary and critic of Immanuel Kant, Johann Georg Hamann, suggested a metaphor for understanding the interrelationship of matter and thought. This metaphor is language. The appropriateness of this metaphor can be seen both in the importance that language abilities play in the evolutionary transition to the human species and in the characteristics of complex adaptive systems.
Detlev Tönsing1,2
1School of Religion and Theology, University of KwaZulu-Natal, Pietermaritzburg Campus, South Africa
2Lutheran Theological Institute, University of KwaZulu-Natal, Pietermaritzburg Campus, South Africa
Correspondence to: Detlev Tönsing, Private Bag X1, Scottsville 3209, South Africa
© 2012 The Authors. Licensee: AOSIS OpenJournals. This work is licensed under the Creative Commons Attribution License.
Received: 22 Oct. 2010; Accepted: 03 Mar. 2011; Published: 06 Feb. 2012
Tönsing, D., 2012, ‘Can matter and spirit be mediated through language? Some insights from Johann Georg Hamann’, HTS Teologiese Studies/Theological Studies 68(1), Art. #971, 5 pages.
Introduction
Mind, matter, Calvin and Darwin
In 2009, the bicentennial of Darwin's birth and the quincentennial of the birth of Calvin, the theologian of the Holy Spirit, were celebrated. This conjunction can be used as an occasion to reflect on the relationship between spirit and the evolution of matter.
In this article, I will use the term ‘spirit’ in the sense of that which defines the human being and makes humans different from animals. This is in the sense of Genesis 2, where the breath of God makes the human being into what it essentially is. This passage, as well as the passage in Genesis 1 about the imago dei in humans, shows a link between the human spirit and the Spirit of God. At the end of this article, I will draw implications for what the concept of ‘Spirit of God’ may imply from my conclusions about the human spirit itself. The concept of the human spirit or soul has been widely and controversially discussed and I will not enter into discussions on the relationship between spirit and soul, their distinction, and whether one should distinguish between body and spirit or body, soul and spirit in humans. Instead, I will focus on the use of ‘spirit’ as that which defines humans as distinct from (other) animals.
Calvin (1843:171), as with Luther and medieval Catholicism, distinguishes the soul of the human being, which he acknowledges can also be termed ‘spirit’, from the body. It is this spirit that is the locus of the imago dei in humans and distinguishes humans from other beings. Examining the further distinctions of the soul – into intellect and will (Calvin 1843:180) – is beyond the scope of this article.
Transcendental categories and language
Kant and Hamann
In between the theories of Calvin and Darwin lie those of Descartes and Kant, representing the culmination of Enlightenment thought. The distinction between spirit and body, or matter, is developed further and entrenched as both Descartes and Kant separated the concept of spirit from matter – the res extensa from the res cogitans, the noumena from the phenomena.
Matter, with the main property of extension, is separated from intellect, which has the main ability of perception and thought. What is observed by the perceiving spirit is distinguished from the thing in itself. Kant reproduced the distinction between intellect and will within the spirit in his distinction between theoretical and practical reason in his two seminal works: Critique of pure reason (1781) and Critique of practical reason (1788). The division between spirit and matter was heightened to such an extent that one of the main difficulties of philosophy became how to explain the ability of human spirit to acquire information from the bodily senses and how the human spirit could effect instruction to the human body; that is, the mind–body problem (Carrier & Mittelstrass 1995:16, 26).
The consequences of this separation are present with us today and can be seen in the issue of the observer in quantum mechanics. The wave function develops continuously and predictably in accordance with the Schrödinger equation – until there is an observation, at which point the wave function, which assigns probabilities to the outcomes of possible observations, collapses to the definite state of one definite observed answer. Which answer is observed is not predictable, but probabilistic. The wave function then resumes its continuous development – until the next observation. What exactly constitutes an observation, and what this implies for unobserved states, is still a topic of discussion in interpretations of quantum mechanics (Laurikainen 1991:198; Shimony 2001:5).
Another consequence of the mind–body dualism is the relative devaluation of the body, as opposed to the intellect prevalent in Western culture. This can be seen in the relative value attached to manual and intellectual labour, as well as in the valuation of objects primarily in terms of the specific human contribution to their existence (Smith 1843 [1776]:20). Yet, in this conception, because value is given only to the human input in production, this devalues those aspects of nature untouched by humans. If that which is not made by humans is conceived as valueless, such a conception can be argued to be a major contributor to environmental degradation. Mind–body, spirit–matter dualism is therefore arguably at the root of some of the most important issues of our time. So how can this dualism be overcome?
As a contemporary critic and friend of Immanuel Kant, Johann Georg Hamann may contribute to finding an approach to answering this question. Hamann assisted Kant in the publication of the Critique of pure reason. Having therefore had access to the text before publication, he wrote a response very soon after its appearance, although this was only published much later out of respect for his friendship with Kant. He named this response Metakritik über den Purismus der Vernunft (1825 [1784]), thereby coining the term ‘metacritique’. He concludes this response, a mere 14 pages long in comparison with the 440 pages of Kant’s work, with the words:
This last possibility to draw the form of an empirical perception without object or sign from the pure and empty property of our external and internal mind is the Archimedean fulcrum ‘give me to stand’ and ‘origin of the deception’, the cornerstone of critical idealism and its tower- and lodgebuilding of pure reason. The given or taken materials belong to the categoric and idealistic forests, peripatetic and academic store rooms. The analysis is nothing more than a cut according to the fashion, as the synthesis is just an artful seam of a good leather- or clothes tailor. For the sake of the weak reader, I have interpreted that which the transcendental philosophy metagrabolises (sounds out at length) to the sacrament of language, the letters of its elements, the spirit of its institution. I leave everyone to unfold the closed fist into an open hand. (Hamann 1825 [1784]:16, [author’s own translation])
This passage, as with most of Hamann’s writing, needs interpretation – the closed fist needs to be expounded into an opened hand.
In his Critique of pure reason, Kant (1855 [1781]:31–43) attempts to demonstrate – with clarity and not hypothetically, but with apodictic certainty – that the true basis of thought lies in the universal conditions of perception which exist in all humans before the diverse vagaries of experience can arise: the categories of space, time, number and causality being the chief of these. In his response, Hamann (1825 [1784]:6) denies the possibility of universal reason, free from the vagaries of tradition and experience, as reason always depends on language and all language arises out of experience handed on, as sensus communis, in the process of tradition. Hamann (1825 [1784]:8) denies that a perfect, well-defined, abstract language without reference to everyday language is possible, calling this an ens rationis [pure thought] ‘nothing’. Consequently, he decries the whole project of universal and controlling reason of an isolated individual, which Kant (1855 [1781]:17) proposes, as ill-conceived (Hamann 1825 [1784]:9).
Instead, Hamann (1825 [1784]:9) starts with the insight that all thought is language and therefore participates in the particularity of experience and language. Universals are just particular concepts, arising out of particular experiences, which have been given very extensive fields of meaning. However, in language, the material – the sound of a syllable, the shape of a letter, of a word, becomes a carrier of meaning (Hamann 1825 [1784]:12). Hamann sees this joining of the material basis to the content of meaning as fundamentally analogous to that of the sacrament – where the meaning of the word of grace is joined to the material symbol of bread, wine or water. Hamann (1825 [1784]:16) suggests that the key to overcoming the matter–spirit dualism lies in the ‘sacrament of language’. Before we explore the consequences of such an approach, let us see whether there is evidence that would support this claim.
Language, mind and evolution
This question brings us to Darwin and the theory of evolution. Whilst the general philosophical approach in Darwin’s time still presupposed spirit–matter dualism, the theory of the evolution of humans presupposes a continuum between matter and spirit. It suggests that the human mind and spirit arose in a continuous process. At some stage, the beings that evolved from early apes, and later were our ancestors, began cognition and became human – as the name Homo sapiens implies. When did this occur? And what change made them human, cognitive, inspirited beings? What in matter can give rise to spirit? Indeed, the opposition to the suggestion of continuity between matter and spirit, between animal bodies and human minds, was one of the main reasons for contemporaries of Darwin to reject his theory (Peacocke 1985:101−102).
In human palaeontology, tool manufacture had often been used as the indicator of humanness because it implies planning and self-awareness. This approach corresponds to Kant’s interpretation of what human spirit, human mind, is – a theoretical mind capable of accurately and objectively perceiving objects and a practical mind capable of manipulating objects to its will, capable of intention and planning (Kant 1855 [1781]:464). This corresponds to the definition of human beings common to much of philosophical discourse – a rational being with free will. The fundamental nature of the human being perceived in this approach corresponds to the names given to human ancestors: Homo habilis and Homo ergaster (Wood & Collard 1999:197).
However, recent research in palaeoanthropology indicates that tool use, and even tool construction, is an insufficient criterion for humanness. Studies indicate that crows and apes can also modify objects to serve as tools. Also, the development of artefacts, cultural settlements and complex behaviour suddenly increased in speed approximately fifty thousand years ago – indicating a qualitative shift from biological adaptation to cultural adaptation and cultural transmission of information. This is now taken as the true indication of the origin of humans as we experience ourselves. It is linked to the appearance of complex, grammatical language, associated with Broca’s area of the brain (Cela-Conde & Marty 1998:448; Diamond 1992:141). Religious behaviour originates at the same time, indicated by burial rituals and cultural construction, by the decoration of artefacts and production of paintings, as well as the construction of musical instruments (Ambrose 2001:1749; Cross, Zubrow & Cowan 2002:28; Mcbrearty & Brooks 2000:458). Therefore, although it is more difficult to detect than tool construction, language use arguably is the defining characteristic of being human, much more so than tool use or construction, which does not seem to denote an equally dramatic shift from pre-human hominids. Spirit seems therefore to be linked to language, in terms of the evolutionary origin of human beings.
One intriguing fact in human evolution is the extraordinary capacity, complexity and information processing ability of the human mind. The brain evolved in a hunter-gatherer society, where the information-processing needs of the average member were vastly less than those required of humans in our post-industrial information society. The brain has not changed dramatically in structure since then. The same brain that today can devise sub-quantum physical theories and process thousands of pages or screens of information then needed to do only a little more than the average baboon still does today – gather tubers and insects. The ability to devise theories that span the evolution of the universe, that can project back 15 billion years and forward equally long, and construct devices and societies as complex as supercomputers and mega-cities, seems to outweigh by far the evolutionary needs of the hunter-gatherers in which this mind evolved. Added to this is the significant evolutionary cost of a large brain: the problem of giving early birth to large-skulled children, caring for them in a long infancy, and the fact that the brain consumes an inordinate proportion of the energy of the body: 20% − 30%. The one parallel in other species, of organs that rapidly develop in size and complexity beyond the direct needs of survival, is found in those that signal sexual fitness and health (Miller 2000:130−132). Some examples of this are the tails and crests of whydahs, paradise birds and peacocks, as well as the antlers of elk. The hypothesis therefore is that brain size and function evolved to demonstrate health and ability as a preferential partner in mate selection. Yet how was this brain size demonstrated? Rapid growth in brain size occurred directly after the formation of language areas, and so it stands to reason that brain size was demonstrated through linguistic ability in the mate-selection process. The implication is that we developed our mind in order to compose love songs, not in order to make tools.
The second linkage of matter and spirit, according to science, originates in the area of complexity. Instead of asking how our humanness arose historically in the process of evolution, the question here is how does our humanness actually arise out of the way we are made? How does the mind that I am arise out of the body that I am, too? Whilst the interaction of mind and body had, in the history of philosophy, been linked to the pineal gland or explained in terms of pre-stabilised harmonies of monads, the attempt of modern neuroscience to answer this question lies in the theory of complex, non-linear systems. The theory of complex systems indicates that these systems develop a mode of behaviour wherein the system behaves as a whole and its behaviour must be studied on the level of the system as a whole, using different categories from the study of the parts of the system (Clayton 2006:677, 681; Peacocke 1986:28−29, 90−91, 1993:224−225, 2000:135). This mode of behaviour is also called supervenience or emergence. A typical feature of this is that the higher level reality – the meaning – can be instantiated in different ways in the lower level – the representation – and that the representation level, whilst constrained by lower-level laws, is flexible enough to take different configurations that are of equal energy, but carry different meaning. There is no effective difference between different arrangements of the bases in a DNA string, but the different arrangements carry different information. This is therefore similar to the relationship between the physical representation of words in sounds or written signs and the meaning of those signs in a certain language-context (Murphy 1998:476−477).
Humanity and language, rationality and relationality
If, then, it can be reasonably held that the humanity of human beings lies in their ability to use language in pursuit of relationships, and not primarily in their ability to grasp the world rationally nor in their ability to manipulate the world technically, what then would be the consequences for the conception of humanity? This conception would define humans not purely as rational beings, but rather as relational beings, thus coming closer to the African understanding of umuntu ngumuntu ngabantu. Language is not something that can be used purely to describe the world (Miller 2000:359), something purely theoretical and interpreted in the mode of seeing as the primary sense, but rather something relational, with hearing – and answering – being the fundamental sense and mode of perception. The implication is that the human is not a distant and unrelated observer, nor a controlling artificer, but a communicant, a partner and participator in a communicative process (Hamann 1822 [1760]:261).
Following this, human self-understanding would shift by necessity and the source of self-worth would have to be redefined. I am more human not when I know more, nor when I have more power, but when I relate more and deeper through the language I use. This would especially be true if these relationships are truly relationships of communication and not relationships of power exercise – for in such relationships, both parties would be, and would make themselves, vulnerable to one another and, through these relationships, a community would be created where the search for the common good would outweigh the competition for position. In our broken world, this may be a simplistic expectation – and real relationships, in our experience, most often contain struggles for power and recognition; however, a tendency towards humans being viewed more relationally and less in terms of abilities may be indicated by this perspective.
If this would be accepted as basis of society, then our society would spend less effort on controlling reality and exploiting it for knowledge or goods or power and more on the development of relationships. Even knowledge would be conceived of differently – for the ideal of knowledge in the age of science is that originating from Francis Bacon (1825:219), who defined knowledge in terms that related it to technical mastery over the world: knowledge is power. Knowledge based on a fundamentally relationally conceived language would be closer to the Hebrew understanding of yad’a, where knowledge is the establishment of an intimate and understanding relationship between knower and known. I believe this shift would result in a more respectful approach to our world, which could result in a healthier relation to our environment – which would then lead to a better chance of our survival on this planet.
Relationality, language and God’s Spirit
Furthermore, if that which makes us truly human is the image of God in us, and our spirit is an echo of the Spirit of God, then our concept of God would shift as well. Whilst humans who conceptualise their humanity fundamentally as their ability to discern or master over other creatures would define God in terms of mastery over creation, those who would understand themselves fundamentally as relational beings would conceive of God rather as one who is deeply relational. The classical theistic definition of God – in terms of omnipotence and omniscience – defines God in terms of power – and is rightly criticised by Feuerbach (1981:262). The conception of God in terms of relations corresponds better, in my view, to the God of Jesus of Nazareth, whose main interest is the establishment of relationships with us and who is, as Trinity, fundamentally relational. Of course, part of entering into a relationship is becoming vulnerable – contrary to the detached observer or the master, who is invulnerable. The vulnerability of God, God’s pain at broken relationships, can clearly be seen in the biblical witness, from Hosea to Jesus (Fretheim 1984:155; Moltmann 1972:261).
This understanding of God also has consequences for our fundamental approach to the world. In the 18th and 19th centuries, an age where God was conceived of as Master and designer, the world was conceived of as subject and machine, subjected to absolute and unchanging laws (Barrett 2000:58). However, if God’s Spirit that penetrates and underlies the created order is conceived of in terms of language, then the world, too, will be interpreted in the metaphor of language and relationship. The laws of this world will be conceptualised as being akin to the laws of grammar and good style, rather than as the absolute laws of an absolute monarch. The laws of grammar underlie the possibility of expressing meaning in language – but they are, in their particular shape, neither necessary, nor is obedience to them absolutely mandatory – for whilst wholesale disregard for the rules of grammar destroys meaningful communication, the rules of grammar and style can be bent or broken occasionally with poetic license, when it serves the communication of meaning by an artful author (Hamann 1821 [1758]:138, 1822 [1759a]:17, 1821 [1759b]:508). In this conception, attention is focused away from an exclusive concern with the laws and onto an attempt to understand the meaning of the writing. A world seen only in terms of absolute laws has no meaning – but a world in which the laws of grammar undergird the communication of a particular text can have deep and profound meaning in its overall development, for each of its parts can contribute to that meaning.
Does such a description fit our understanding of the world? The sciences of complexity indicate that it does, that the laws of this world are precisely such as to allow construction of intricate patterns that are not predetermined by the laws, but can develop because of the stability and freedom these laws provide. This balance between freedom and structure is sometimes referred to as the edge of chaos (Gutowitz & Langton 1995:52; Miller & Scott 2007:129).
Can we understand this meaning? Can we read the language of the world? To understand a language, one needs a key, a Rosetta stone. For, in language, individual patterns or words do not have intrinsic meaning – rather, their meaning is fixed by usage. There is nothing in the letters ‘wand’ that predestines one to interpret them as a side of a room, in German, or as a magic stick, in English. Meaning is a priori arbitrary, but a posteriori necessary. What can be the Rosetta stone for understanding the meaning of history? Hamann answers that it is revelation, specifically the revelation of God in Christ that allows us to find the key to the language of history (Hamann 1821 [1758]:148, 1822 [1760]:263). The difference, then, between a believer and an unbeliever in the interpretation of the world is simply that between someone who has had access to the key to learn to understand the language and one who has not.
Language as sacrament
But why call language a sacrament? Hamann (1825 [1784]:16) does so in reference to the definition of a sacrament that underlies the Lutheran understanding and is often quoted in Lutheran writings: ‘accedat verbum ad elementum et fit sacramentum’ (Luther 2000 [1530]:468). When the word, or more properly, the meaning, is joined to the element, the sacrament results. In this perspective, a sacrament has three constituent aspects: a material sign, a meaning that is attached to this, which relates subsequently to the gospel of the gracious self-communication of God. These three elements can be seen in language. Indeed, is language not a prime example of the joining of meaning to a material entity; that is, an auditory signal or a written sign? And is language, after all we have said, not a sign of the gracious communication of God, who desires relationship and has given himself to us in a world that is suited for, and geared toward, the establishment of relationships? Therefore, does language, by its very existence, not give us an indication of the meaning of the world – a continuous growth in complexity and in the depth and extent of relationships amongst the beings in it?
If language can be regarded as indicative of the communication of God with us – in the world, in the word and in the central sacrament of the incarnation of Christ – then language itself participates in the sacramental nature of the word, the word that was in the beginning, was God, and yet became flesh. It is the nearness of language to this incarnated Word that gives it sacramental character. In this sacramental character, we see both God’s self-communication as the ultimate source of language – for God created the world such that it has the character of language, so that he may communicate through it and with it – and God’s intention of establishing communication embodied in relationships within God’s creation. It is in such a relationship, in such a communication, that the dichotomy of matter and spirit is overcome. The consequences of this approach have been indicated in the development of this argument: if the meaning of the world is seen in language, then relationships of communication – and self-communication – become essentially important, rather than those of abstraction or manipulation. Such an approach can contribute to a healing of the divisions engendered by the modern dualisms.
Competing interest
The author declares that he has no financial or personal relationship(s) which may have inappropriately influenced him in writing this article.
1. Bacon, F., 1825, The works of Francis Bacon, Lord Chancellor of England: A new edition, vol. 1, ed. B. Montague, Esq., William Pickering, London.
3. Barrett, P., 2000, Science and theology since Copernicus: The search for understanding, University of South Africa, Pretoria.
4. Calvin, J., 1843, Institutes of the Christian religion, vol. 1, transl. J. Allen, Presbyterian Board of Publication, Philadelphia, PA.
5. Carrier, M. & Mittelstrass, J., 1995, Mind, brain, behavior: The mind–body problem and the philosophy of psychology, Walter de Gruyter, Berlin.
6. Cela-Conde, C. & Marty, G., 1998, ‘Beyond biological evolution: Mind, morals and culture’, in R.J. Russell, W.R. Stoeger & F.J. Ayala (eds.), Evolutionary and molecular biology: Scientific perspectives on divine action, pp. 445−451, Vatican Observatory, Rome.
7. Cross, I., Zubrow, E. & Cowan, F., 2002, ‘Musical behaviours and the archaeological record: A preliminary study’, in J. Mathieu (ed.), Experimental archaeology: British archaeological reports international, ser. 1035, pp. 25−34, Archaeopress, Oxford.
9. Diamond, J., 1992, The third chimpanzee: The evolution and future of the human animal, Harper Perennial, New York, NY.
10. Feuerbach, L., 1981, Gesammelte Werke, vol. IV, ed. W. Schuffenhauer, Akademie-Verlag, Berlin.
11. Fretheim, T., 1984, The suffering of God, Fortress, Philadelphia, PA.
12. Gutowitz, H. & Langton, C., 1995, ‘Mean field theory of the edge of chaos’, in F. Morán (ed.), Advances in artificial life: Third European conference on artificial life proceedings, Granada, Spain, June 04−06, 1995, pp. 52−64, Springer, Berlin.
13. Hamann, J.G., 1821 [1758], Brocken, in Hamann’s Schriften, vol. I, ed. F. Roth, pp. 125–148, G. Reimer, Berlin.
14. Hamann, J.G., 1822 [1759a], Sokratische Denkwürdigkeiten, in Hamann’s Schriften, vol. II, ed. F. Roth, pp. 1–50, G. Reimer, Berlin.
15. Hamann, J.G., 1821 [1759b], To Kant, 30 Okt 1759, in Hamann’s Schriften, vol. I, ed. F. Roth, pp. 508–509, G. Reimer, Berlin.
16. Hamann, J.G., 1822 [1760], Aesthetica in nuce, in Hamann’s Schriften, vol. II, ed. F. Roth, pp. 255–308, G. Reimer, Berlin.
17. Hamann, J.G., 1825 [1784], Metakritik über den Purismum der reinen Vernunft, in Hamann’s Schriften, vol. VII, ed. F. Roth, pp. 1–15, G. Reimer, Berlin.
18. Kant, I., 1855 [1781], Critique of pure reason, transl. J.M.D. Meiklejohn, Henry G. Bohn, London.
19. Kant, I., 1976 [1788], Critique of practical reason and other writings in moral philosophy, transl. L.W. Beck, University of Chicago Press, Chicago, IL.
20.
21. Luther, M., 2000 [1530], The large catechism, in R. Kolb & T.J. Wengert (eds.), The book of Concord: The confessions of the Evangelical Lutheran Church, pp. 377−480, Fortress, Minneapolis, MN.
22. Mcbrearty, S. & Brooks, A.S., 2000, ‘The revolution that wasn’t: A new interpretation of the origin of modern human behavior’, Journal of Human Evolution 39(5), 453–563.
23. Miller, G., 2000, The mating mind: How sexual choice shaped the evolution of human nature, Heinemann, London.
24. Miller, J.H. & Scott, E.P., 2007, Complex adaptive systems: An introduction to computational models of social life, Princeton University Press, New York, NY.
25. Moltmann, J., 1972, Der gekreuzigte Gott, Kaiser, Munich.
26. Murphy, N., 1998, ‘Supervenience’, in R.J. Russell, W.R. Stoeger & F.J. Ayala (eds.), Evolutionary and molecular biology: Scientific perspectives on divine action, pp. 474−478, Vatican Observatory, Rome.
27. Peacocke, A., 1985, ‘Biological evolution and Christian theology – yesterday and today’, in J. Durant (ed.), Darwinism and divinity, pp. 101−130, Blackwell, Oxford.
28. Peacocke, A., 1986, God and the new biology, JM Dent & Sons, London.
29. Peacocke, A., 1993, Theology for a scientific age, SCM Press, London.
30. Peacocke, A., 2000, ‘Chance and law’, in R.J. Russell, N. Murphy & A.R. Peacocke (eds.), Chaos and complexity: Scientific perspectives on divine action, pp. 123–146, Vatican Observatory, Rome.
31. Shimony, A., 2001, ‘The reality of the quantum world’, in R.J. Russell, P. Clayton, K. Wegter-McNelly & J. Polkinghorne (eds.), Quantum mechanics: Scientific perspectives on divine action, pp. 3−16, Vatican Observatory, Rome.
32. Smith, A., 1843 [1776], An inquiry into the nature and causes of the wealth of nations, viewed 18 June 2009, from
33. Wood, B. & Collard, M., 1999, ‘The changing face of genus Homo’, Evolutionary Anthropology 8(6), 195–207.
Research themes
Japanese version is available here .
First-Principle calculation based on Quantum Monte Carlo
First-Principle Quantum Monte Carlo (FP-QMC) is a numerical method for solving the many-body Schrödinger equation using Monte Carlo integration. FP-QMC is one of the state-of-the-art first-principle methods, giving us the most accurate electronic structure available.
However, FP-QMC cannot be performed as easily as DFT, because sophisticated knowledge of condensed matter, many-body theory, and high-performance computing is necessary to make use of an FP-QMC code. I am currently developing new theories and implementations for FP-QMC in the lab of Sandro Sorella, one of the most prominent researchers in the field of FP-QMC, at SISSA (Italy).
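To give a flavor of what "solving the Schrödinger equation by Monte Carlo integration" means, here is a minimal variational Monte Carlo sketch in Python for the 1D harmonic oscillator. This is a toy illustration of mine, unrelated to any production FP-QMC code; the trial wavefunction, step size, and sample counts are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trial wavefunction psi(x) = exp(-alpha * x^2) for H = -1/2 d^2/dx^2 + 1/2 x^2.
# The local energy is E_L(x) = (H psi)(x) / psi(x) = alpha + x^2 * (1/2 - 2 alpha^2).
def local_energy(x, alpha):
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=100_000, step=1.0):
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # Metropolis: accept with probability min(1, |psi(x_new)/psi(x)|^2).
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies[n_steps // 10:])  # discard the equilibration phase

for alpha in (0.3, 0.5, 0.7):
    print(alpha, vmc_energy(alpha))  # minimum (exactly 0.5) at alpha = 0.5
```

The energy estimate is minimized, and its statistical variance vanishes, at alpha = 0.5, where the trial function is the exact ground state; real FP-QMC applies the same logic to many-electron wavefunctions.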
• Submitted to Journal of Chemical Theory and Computation (2019).
Validation and verification of Density Functional Theory (DFT)
Recently, the materials informatics (MI) paradigm, in which novel materials are designed or searched for using techniques of information science and/or computational physics, has attracted much attention because of rapid improvements in computer and information science (including artificial intelligence, AI). The most important problem when applying AI to the field of materials is the lack of well-organized databases of physical properties and functions, a very different situation from board games and web services. High-throughput ab initio calculations of physical properties are therefore often performed to create large-scale databases for machine learning or data mining in MI.
Density functional theory (DFT) seems a promising framework in which to perform quantitative evaluations of physical properties. It is, however, sometimes unable to reproduce experimental results owing to limitations such as a failure to take exchange-correlation effects fully into account, a lack of excited-state information, or the unavailability of an appropriate model. Therefore, it is very important to investigate whether or not a DFT calculation can quantitatively predict a physical property, even if the method has already been implemented in a DFT code.
To validate this predictive power, we are collecting experimental data and comparing them with calculated results.
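In practice, the comparison step boils down to simple error statistics over the collected data. A schematic Python sketch, where all numbers are placeholders for illustration rather than real measurements or calculations:

```python
import numpy as np

# Hypothetical band gaps in eV: experimental values vs. values computed
# with some DFT functional. All numbers here are placeholders.
experiment = np.array([1.12, 3.40, 1.42, 0.67, 2.30])
dft        = np.array([0.65, 2.10, 0.80, 0.20, 1.60])

error = dft - experiment
print("MAE  =", np.abs(error).mean(), "eV")          # mean absolute error
print("RMSE =", np.sqrt((error ** 2).mean()), "eV")  # root-mean-square error
print("mean signed error =", error.mean(), "eV")     # negative: systematic underestimation
```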
Application of first-principle calculation
The materials informatics (MI) paradigm has recently attracted much attention. It stimulates researchers to try to use high-throughput first-principle calculations to design novel materials. However, computational physics and chemistry have traditionally been used to reveal mechanisms, which remains a valid strategy for designing a novel material. I am applying first-principle calculations to compounds recently synthesized by experimental groups.
Novel superconductors in layered titanium-oxypnictides
Many metals show zero resistivity when cooled below a very low temperature called the critical temperature (Tc). This phenomenon was discovered in Hg by Kamerlingh Onnes in 1911 and was later named "superconductivity". A lot of researchers have been looking for novel high-Tc superconductors, ultimately room-temperature superconductors. Although several high-Tc superconducting families have been found, no room-temperature one has been discovered so far.
I am currently developing new theories and implementations for FP-QMC, but I started my career as an experimental researcher. My supervisors (Prof. Kageyama and Dr. Yajima) gave me the mission of finding a novel superconductor among titanium compounds. Fortunately, we found several novel superconductors, such as BaTi2Sb2O and BaTi2Bi2O, which are categorized as layered titanium-oxypnictides. They are still being studied by many groups because of their analogy to the cuprate and iron-arsenide high-Tc superconductors. Although several experiments such as NMR and muSR had revealed the superconducting mechanism, details of the structural disorder occurring at low temperature had not been detected by X-ray or neutron diffraction measurements. I applied first-principle phonon calculations to the layered titanium-oxypnictides to reveal the low-temperature structures.
Collaboration with experimental groups
I am actively collaborating with experimental groups, taking advantage of my background in an experimental group where I synthesized inorganic compounds. I advise on how to use a DFT code, or calculate electronic and/or phonon structures myself.
Tuesday, November 30, 2010
Putting Relativity to the Test
Albert Einstein lecturing in Vienna in 1921.
In the hundred plus years since Einstein published his theory of relativity it has been put to the test many times. General relativity—Einstein’s theory of gravity—has been tested by measuring the apparent shifting of position of stars whose light passes near the sun during a solar eclipse (known as gravitational lensing, discussed here). It has also been used to explain the wobble in Mercury’s orbit around the Sun.
Special relativity, the theory that introduced the concept of spacetime, and from which Einstein derived his famous equation E=mc2, has also been tested. The famous Michelson and Morley experiment (discussed here) showed that the speed of light is the same in every direction, a result Einstein later took as a postulate of the theory. Another aspect of special relativity—time dilation—has been confirmed by comparing the time on an atomic clock sent on an around-the-world airplane trip with one left behind. There is a very small difference between the times of the two clocks, which agrees precisely with Einstein’s theory.
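To get a feel for the size of the effect, here is a rough, illustrative estimate (my numbers, not figures from the actual experiment): a jet cruising at about 250 m/s has (v/c)^2 ≈ (250/300,000,000)^2 ≈ 7×10^-13, so over a 40-hour trip (about 140,000 seconds) special relativity alone predicts the flying clock to differ by roughly 1/2 × 7×10^-13 × 140,000 s ≈ 50 nanoseconds. That is tiny, but well within the reach of atomic clocks. (The actual around-the-world comparison also had to account for general-relativistic effects of altitude.)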
Now it looks like relativity may again be tested, this time by physicists working at CERN who have managed to create and trap small quantities of antimatter by using very powerful magnetic fields in devices known as Penning traps. The antimatter they’ve created is antihydrogen. A normal atom of hydrogen consists of a positively-charged proton bound to a negatively-charged electron. Antihydrogen, however, is made by binding a negatively-charged antiproton to a positron (the positively-charged antiparticle of the electron). According to Einstein’s theory, if scientists can collect enough antihydrogen to do a spectrum analysis, it should match the spectrum of hydrogen exactly. It’s a long shot that the spectra will differ, but if they do it would leave many physicists in shock, and quite likely earn the team that discovers it a Nobel Prize.
1) True or false: Einstein’s theory of relativity predicts that gravity can bend light.
2) Antihydrogen is created by binding an antiproton with a(n) _________________.
a) positron b) antineutron c) proton d) electron
3) True or false: a clock in motion runs slower than a clock at rest.
4) Antimatter has been contained in small quantities using a _________________.
a) strong electric field b) weak magnetic field c) Penning trap d) graduated cylinder
5) In E=mc2, the constant c stands for _________________.
For additional content, musing and discussions follow us on facebook.
Tuesday, November 23, 2010
An Evolutionary Snapshot
Falcarius skeleton reconstruction, courtesy of
Utah Museum of Natural History.
Packed into a two-acre excavation site near Green River, Utah are hundreds, maybe thousands, of fossilized remains of a new dinosaur species discovered back in 2001: Falcarius utahensis.
This species lived about 125 million years ago during the early Cretaceous Period. Falcarius walked on two legs. Adults measured about four meters from head to tail and were well over a meter tall. They had sharp, curved claws measuring up to 15 cm in length and were probably covered with hairy feathers. It’s thought that these creatures are among the ancestors of modern-day birds.
Artistic rendering of a Therizinosaur, courtesy of Nobu Tamura.
Yet the most interesting aspect of these dinosaurs is that they are thought to have been omnivores. Dr. Scott Sampson, chief curator at the Utah Museum of Natural History and coauthor of a study on Falcarius published in the May 2005 issue of Nature, is quoted as saying that Falcarius “...is the missing link between predatory dinosaurs and the bizarre plant-eating Therizinosaurs.”
Evidence that it ate plants includes a large pelvis bone to support a larger intestinal tract that’s needed for digesting plants. Falcarius also had leaf-shaped teeth which were ideal for eating plant material. Yet—like Velociraptor—they had sharp, curved claws for hunting, so it’s thought that they also caught and ate small animals.
The name Falcarius comes from Latin, and means sickle maker which aptly describes its unusual clawed limbs. The name may not be as catchy as T-Rex or Velociraptor but this dinosaur is definitely interesting.
1) True or false: Falcarius is thought to be an herbivore.
2) Falcarius is believed to have been covered with __________.
a) smooth skin b) scales c) fur d) hairy feathers
3) True or false: Falcarius had teeth that were well-suited for eating plants.
4) Falcarius is thought to be a missing link between carnivores and ____________.
a) Velociraptor b) lizards c) herbivores d) omnivores
5) Falcarius is Latin for _________.
Wednesday, November 17, 2010
Cliff Palace
Cliff Palace at Mesa Verde National Park
Mesa Verde National Park, located near the southwest corner of Colorado, is home to some of the best-preserved ancient cliff dwellings in the world. The largest structure is called Cliff Palace. It was built by the Anasazi people—ancient ancestors of today’s Puebloans—over 750 years ago.
Recent studies reveal that Cliff Palace had 150 rooms and 23 kivas, or rooms used for religious ceremonies. It’s estimated that about 100 people lived at Cliff Palace. This is quite a departure from the typical cliff dwellings found at Mesa Verde National Park, which contain from one to five rooms each, with many of the single room structures being used for storage. Archaeologists believe that Cliff Palace was used mainly for religious ceremonies.
Looking at the size of the doorways in Cliff Palace one realizes just how small the Anasazi people must have been—an average man being about 163 cm (5’4”) tall, and an average woman being about 152 cm (5’0”) tall. The Anasazi’s life span was relatively short, partly because of an exceptionally high infant mortality rate. Sadly, about half of their children died before their fifth birthday, with most adults only living into their mid- to late-30s. One can only imagine the hardships they must have faced!
Cliff Palace is mainly constructed of sandstone, mortar and wooden beams. The Anasazi would collect hard river rocks and use them to shape the larger sandstone blocks for the bulk of their structures. For mortar, they used a mixture of soil, water and ash. They would fill the gaps in the mortar with smaller “chinking” stones to add stability to the walls. Then they painted the surface of the walls with colored plaster made from mud and clay.
Ansel Adams visited Cliff Palace in 1941 and published several spectacular photographs which can be viewed online at the National Archives website, along with his other photographs of our national parks: www.archives.gov/research/ansel-adams
1) True or false: Puebloans are the ancient ancestors of the Anasazi peoples.
2) Cliff Palace was used mainly for _______________.
a) grain storage b) living quarters c) religious ceremonies d) recreation
3) True or false: The Anasazi people were much smaller than modern people.
4) Rocks used to increase the stability of the walls of Cliff Palace are called ____________.
a) sandstone blocks b) chinking stones c) river rocks d) mortar
5) Cliff Palace was photographed in 1941 by __________.
Tuesday, November 9, 2010
Little Green Men
A composite image of the Crab Nebula showing the X-ray (blue) and optical (red) images superimposed.
Within the depths of the Crab Nebula there lies a beast. It is an object only 19 km in diameter that emits pulses of radio waves at a rate of 30 times per second. These pulses are so powerful that they light up the entire nebula. This object is the remnant of a supernova explosion, one of the most energetic events in the entire universe. It's a pulsar, a rotating neutron star: a star so dense that a teaspoon of it would have a mass about 900 times that of the Great Pyramid of Giza. It's so dense that the atoms within it have collapsed under the gravitational crush, to the point where electrons have been pulled into the nuclei, converting protons into neutrons.
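A rough check of that teaspoon comparison (with assumed round numbers: a neutron-star density of about 5e17 kg/m³, a 5 mL teaspoon, and a pyramid mass of about 6e9 kg):

rho_ns = 5e17      # assumed neutron-star density, kg/m^3
teaspoon = 5e-6    # teaspoon volume, m^3
pyramid = 6e9      # assumed mass of the Great Pyramid of Giza, kg
mass = rho_ns * teaspoon        # roughly 2.5e12 kg of neutron-star matter
print(mass / pyramid)           # several hundred pyramids per teaspoon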
A pulsar creates a magnetic field a million times more powerful than Earth's. It also creates powerful beams of electromagnetic radiation emanating from its two poles. The reason that pulsars pulse is that they are rotating. As they rotate, we detect these jets of radiation at regular intervals, much in the same way that a lighthouse works.
Because a pulsar rotates about an axis that is not aligned with its magnetic poles, an observer will see regular pulses of radiation as the magnetic poles come in and out of sight.
The Crab Nebula pulsar was formed in the aftermath of the supernova explosion of 1054 AD, which was recorded by Chinese and Arab astronomers at the time. For two years it was visible to the naked eye, and at its peak it was the second brightest object in the night sky, surpassed only by the moon. Thanks to these ancient astronomers, this is one of the earliest well-documented observations of a supernova explosion.
The first pulsar was discovered in 1967 by Jocelyn Bell and Antony Hewish. At first they were perplexed by the regularity of the pulses and named their find LGM-1, which stands for "little green men." Some thought that pulsars might be radio beacons from alien civilizations, and it wasn't until about a year later that astrophysicists were able to determine what was really going on.
Monday, November 1, 2010
Quantum Quotes
Much has been written about quantum mechanics and how difficult—if not impossible—it is to grasp its true meaning. Niels Bohr, the Danish physicist who won a Nobel Prize in 1922 for his contributions to the understanding of quantum mechanics, said “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.” Noted physicist Richard Feynman echoed this sentiment by saying “I think I can safely say that nobody understands quantum mechanics.”
Schrödinger's cat is placed in a sealed box with a flask containing a poison and some radioactive material. If the Geiger counter detects radiation, the flask is broken and the poison is released, killing the cat. Quantum mechanics can be interpreted to say that after a while the cat is both alive and dead.
Erwin Schrödinger, famous for his thought experiment where a cat could be simultaneously alive and dead depending on the occurrence of a random quantum event, said this regarding quantum theory: “I do not like it, and I am sorry I ever had anything to do with it.” Considering that he won a Nobel prize in 1933 for his famed Schrödinger equation which is central to quantum mechanics, one would hope he was joking! The aspect of quantum mechanics that particularly bothered Schrödinger is called “quantum leaping”, where an electron instantaneously jumps from point A to point C without ever passing through point B.
Perhaps the biggest critic of quantum theory was Einstein, who jokingly said “Marvelous, what ideas the young people have these days. But I don’t believe a word of it.” Eventually he was convinced that it did indeed have merit, but even then it was his belief that the ability to understand what’s actually happening at the subatomic level exceeds the mental powers of physicists. In a 1926 letter to Max Born, Einstein wrote in reference to quantum theory “I, at any rate, am convinced that God does not throw dice.” In response, Bohr famously said, “Einstein, stop telling God what to do.”
1) True or false: quantum theory is easily understood.
2) ______________ won a Nobel Prize for his quantum theory equation.
a) Albert Einstein b) Niels Bohr c) Richard Feynman d) Erwin Schrödinger
3) True or false: Einstein was a critic of quantum theory.
4) A mental exercise that would be difficult or impossible to perform is called a(n) _________________.
5) The ability of an electron to instantly jump from one location to another is known as _________________________.
Non-singular cloaks allow mimesis
We design non-singular cloaks enabling objects to scatter waves like objects of smaller size and very different shape. We consider the Schrödinger equation, which is valid e.g. in the contexts of geometrical and quantum optics. More precisely, we introduce a generalized non-singular transformation for star domains, and numerically demonstrate that an object of nearly any given shape surrounded by a given cloak scatters waves in exactly the same way as a smaller object of another shape. When a source is located inside the cloak, it scatters waves as if it were located some distance away from a small object. Moreover, the invisibility region actually hosts almost-trapped eigenstates. Mimetism is numerically shown to break down for the quantized energies associated with confined modes. If we further allow for non-isomorphic transformations, our approach leads to the design of quantum super-scatterers: a small object surrounded by a quantum cloak described by a negative anisotropic heterogeneous effective mass and a negative spatially varying potential scatters matter waves like a larger nano-object of different shape. Potential applications might be, for instance, in quantum dot probing. The results within this paper, as well as the corresponding derived constitutive tensors, are valid for cloaks whose cross-sections have arbitrary star-shaped boundaries, although for the numerical simulations we use examples with piecewise linear or elliptic boundaries.
University of Liverpool. Department of Mathematical Sciences,
M.O. Building, Peach Street, Liverpool L69 3BX, UK
Institut Fresnel-CNRS (UMR 6133), University of Aix-Marseille,
case 162, F13397 Marseille Cedex 20, France.
1 Introduction
Control of electromagnetic waves can be achieved through coordinate transformations, which lead to exotic material parameters [1, 2, 3]. Electromagnetic metamaterials, within which negative refraction and focussing effects involving the near field can occur [4, 6, 7, 8], can be understood in light of transformation optics [3].
Recently, an electron focussing effect across a p-n junction in a graphene film, mimicking the Pendry-Veselago lens in optics, has been proposed [9]. The subsequent theoretical demonstration of a sizeable per cent transmission of cold rubidium atoms through an array of sub-de Broglie wavelength slits brings the original continuous wave phenomenon in contact with the quantum world [10].
Other types of waves, such as water waves, can be controlled in a similar way using transformation acoustics [11], leading to invisibility cloaks for pressure waves thanks to the design of two-dimensional [12, 13] and three-dimensional cloaks [14, 15]. It has been further demonstrated that broadband cloaking of surface water waves can be achieved with a structured cloak [16]. Interestingly, cloaking can be further extended to in-plane elastic waves [11, 17] and bending waves in thin plates [18].
In this paper, we focus our analysis on the cloaking of quantum waves, which involves spatially varying potentials and anisotropic effective masses of particles, as first proposed by the team of Zhang [19] and further mathematically studied by Greenleaf et al. [20]. We build on the former proposal to render a quantum object smaller, larger, or even change its shape. Our point here is to apply the versatile tool of transformation physics in an area where the size of the object can dramatically change the physics: for instance, a quantum super-scatterer might enhance the interactions of quantum dots with the mesoscopic scale, thereby enabling quantum effects in metamaterials.
2 Transformed governing equations for matter waves
Following the proposal by Zhang et al. [19], we consider electrons in a crystal with slowly varying composition: the spatially varying potential $V$ is the sum of the energy of the local band edge and a slowly varying external potential. In cylindrical coordinates, with invariance along the cylinder axis, and letting the effective mass $m^*$ be isotropic in these coordinates, the time-independent Schrödinger equation takes the form
$-\nabla\cdot\Big(\frac{\hbar^2}{2m^*}\nabla\psi\Big)+V\psi=E\psi . \qquad (1)$
Here, $\hbar$ is the Planck constant and $\psi$ is the wave function. Importantly, this equation is supplied with Neumann boundary conditions on the boundary of the object to be cloaked.
Let us consider a map from a co-ordinate system $(x_1,x_2,x_3)$ to the co-ordinate system $(x'_1,x'_2,x'_3)$. This change of co-ordinates is characterized by the transformation of the differentials through the Jacobian $\mathbf{J}$, that is, $\mathrm{d}\mathbf{x}=\mathbf{J}\,\mathrm{d}\mathbf{x}'$ with $J_{ij}=\partial x_i/\partial x'_j$.
From a geometric point of view, the matrix $\mathbf{T}=\mathbf{J}^{T}\mathbf{J}/\det(\mathbf{J})$ is a representation of the metric tensor. The only thing to do in the transformed coordinates is to replace the (homogeneous and isotropic) effective mass and the potential by equivalent ones. The effective mass becomes heterogeneous and anisotropic, while the potential gets a new expression. Their properties are given in [19] in terms of the upper diagonal part of the inverse of $\mathbf{T}$ and of its third diagonal entry, $T_{33}$.
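As an illustration of this recipe, here is a minimal numerical sketch (not the authors' code: the linear radial map of the classic cylindrical cloak stands in for the star-domain maps of Section 3, and T = J Jᵀ/det J is the standard transformation-media tensor; the radii and the sample point are arbitrary choices):

import numpy as np

R1, R2 = 0.5, 1.0    # inner and outer radii of the model cloak

def cloak_map(x):
    """Linear radial map r' = R1 + r*(R2 - R1)/R2, sending the virtual
    disc r <= R2 onto the physical shell R1 <= r' <= R2."""
    r = np.hypot(x[0], x[1])
    if r == 0.0:
        return np.array([R1, 0.0])
    rp = R1 + r * (R2 - R1) / R2
    return x * (rp / r)

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J

x = np.array([0.7, 0.2])               # a point whose image lies in the shell
J = jacobian(cloak_map, x)
T = J @ J.T / np.linalg.det(J)         # transformation-media tensor
print(np.linalg.eigvalsh(T))           # finite principal values: non-singular here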
The transformed equation associated with the quantum mechanical scattering problem (1) reads
$-\nabla\cdot\Big(\frac{\hbar^2}{2m^*}\,\mathbf{T}\,\nabla\psi\Big)+V\psi=E\psi , \qquad (4)$
where importantly the energy $E$ remains unchanged. We note that $\psi$ satisfies the usual Sommerfeld radiation condition (also known as the outgoing wave condition in the context of electromagnetic and acoustic waves), which ensures the existence and uniqueness of the solution to (4).
It is indeed the potential and the mass density tensor (e.g. involving ultracold atoms trapped in an optical lattice as proposed in [19]) which play the role of the quantum cloak at a given energy $E$. However, there is a simple correspondence between the Schrödinger equation and the Helmholtz equation, the energy of the former being related to the wave frequency $\omega$ of the latter (up to a normalization), with $c$ the wavespeed in the background medium, say vacuum. The present analysis thus covers cloaking of acoustic and electromagnetic waves governed by a Helmholtz equation, with correspondences bridging the current analysis with a model of transverse electric waves in cylindrical metamaterials holding for the material parameters on the one hand and the potential on the other hand.
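To make the quoted correspondence concrete, here is a minimal worked identification (our normalization; the paper's stripped formula may differ in constants). In a homogeneous background the two equations read
$\nabla^{2}\psi+\frac{2mE}{\hbar^{2}}\,\psi=0, \qquad \nabla^{2}u+\frac{\omega^{2}}{c^{2}}\,u=0,$
so matching the two wavenumbers, $k^{2}=2mE/\hbar^{2}=\omega^{2}/c^{2}$, gives $E=\hbar^{2}\omega^{2}/(2mc^{2})$, which reduces to $E=\omega^{2}$ under the normalization $\hbar=2m=c=1$.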
3 Mathematical setup: Generalized cloaks for star domains
This section is dedicated to a mathematical model generalizing the blowup of a point to a transformation sending a domain to another, thus making the latter inherit the same electronic, electromagnetic or acoustic behavior as the former, depending upon the physical context. Although in this paper we will restrict ourselves to cloaking regions in the plane or in Euclidean 3-space, the transformation we propose can be readily extended to any star domain in $\mathbb{R}^n$, that is, a domain with a vantage point from which all points are within line-of-sight. In particular, the transformation still preserves all lines passing through that chosen fixed point.
Here is a description of the transformation in layman's terms, but this could be formalised mutatis mutandis in very abstract mathematical settings by working directly with the divergence-form PDE of electrostatics [21] or with the Laplace-Beltrami equation of an associated Riemannian metric [22]. For simplicity, let us consider bounded star domains in $\mathbb{R}^3$ with piecewise smooth arbitrary boundaries, all sharing the same chosen vantage point. We suppose the outermost domain contains a second domain which, in turn, contains a third. Typically, the second domain is the one to be made to mimic the third. The new transformation will be the identity outside the outermost domain, but will send the hollow region between the outer boundary and the boundary of the smallest domain to the hollow region between the outer boundary and the boundary of the second domain, in such a way that the outer boundary stays point-wise fixed, while the boundary of the smallest domain is mapped to that of the second. The hollow region between the outer boundary and the second domain is meant to be the model for the cloak, endowed with Neumann conditions on its inner boundary, and in which any type of defect could be concealed; it will still have the same electronic (resp. electromagnetic or acoustic) response as the region outside a potential wall (resp. infinitely conducting boundary or rigid obstacle) shaped as the smallest domain. In practice, we may divide the domains into subdomains, the part of whose boundaries lying inside the outer domain is a smooth arbitrary hypersurface.
However, in such an ideal cloaking, there is a dichotomy between generic values of the energy $E$ (resp. wave frequency $\omega$), for which the wave function must vanish within the cloaked region, and the discrete set of Neumann eigenvalues of the cloaked region, for which there exist trapped states: waves which are zero outside the cloaked region and equal to a Neumann eigenfunction within it. Such trapped modes have been discussed in [22] when the potential vanishes.
Figure 1: Construction of a generalized non-singular cloak for mimetism. The transformation with inverse (6) shrinks the region bounded by the two outermost surfaces into the region bounded by the outer surface and the inner boundary of the cloak. The curvilinear metric inside the carpet (here, an orange flower) is described by the transformation matrix, see (7)-(14). This is designed to play the double role of mimesis and cloaking: any type of quantum object located within the inner region will be invisible to an outer observer, while the region itself still scatters matter waves like a smaller object (here, a yellow star). In the limit of a vanishing yellow region, the transformation matrix becomes singular on the inner boundary (ordinary invisibility cloak).
The transformation is constructed as follows. Consider a point $\mathbf{x}$, relative to a system of coordinates centered at the chosen vantage point $0$. The line passing through $\mathbf{x}$ and $0$ meets the three boundaries at unique points, and the map is interpolated along this line between those intersection points. We actually need the inverse of the transformation, which can be written explicitly in this coordinate system. The cases of interest in this paper can all be considered as cylinders over some plane curves (triangular, square, elliptic, sunflower-like cylinders, etc.). Thus, we consider the transformation mapping the region enclosed between the innermost and outermost cylinders into the space between the cloak's inner and outer cylinders as in Figure 1, whose inverse is given by (6).
The matrix representation of the tensor $\mathbf{T}$, the expressions of its coefficients and the required partial derivatives are given by formulas (7)-(14).
Now, after having derived the general formulas for mimesis, we turn to the numerical simulations. From formulas (6)-(14), in order to construct our cloak, we only need to know the three points where a ray from the vantage point meets the boundaries, together with their respective derivatives. The explicit illustrations we have supplied to exemplify the work within this paper have boundaries whose horizontal plane sections are parts of ellipses (sunflower-like petal, cross, circle) or lines (parallelogram, hexagram, triangle).
4 Mimetism for non-singular cloaks
In Section 3 we presented the theoretical study of the mathematical model underlying our proposal for cloaks whose cross-sections have arbitrary star-shaped boundaries, which perform mimetism as well as allowing invisibility. In the present section, we illustrate this with examples with piecewise linear or elliptic boundaries and provide their numerical validation.
4.1 Formulas for piecewise linear boundaries
If a piece of the boundary of a star domain is part of a line of the form $y=ax+b$, then clearly the line through the origin and a point $(x_0,y_0)$ intersects this piece of boundary at
$\Big(\frac{b\,x_0}{y_0-a\,x_0},\ \frac{b\,y_0}{y_0-a\,x_0}\Big),$
and hence the corresponding radius is $\frac{b\sqrt{x_0^2+y_0^2}}{y_0-a\,x_0}$.
Of course, in the case where this piece of boundary is a vertical segment with equation $x=d$, then the above intersection is at $\big(d,\ d\,y_0/x_0\big)$.
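A small numerical sketch of this intersection rule (a hypothetical helper, not the authors' code; we use the implicit line alpha*x + beta*y = gamma, which covers both the generic and the vertical-segment cases):

import numpy as np

def ray_line_intersection(x0, y0, alpha, beta, gamma):
    """Point where the ray from the origin through (x0, y0) meets the
    line alpha*x + beta*y = gamma.  The ray is (t*x0, t*y0), so
    alpha*t*x0 + beta*t*y0 = gamma gives t = gamma / (alpha*x0 + beta*y0)."""
    denom = alpha * x0 + beta * y0
    if np.isclose(denom, 0.0):
        raise ValueError("ray is parallel to this piece of boundary")
    t = gamma / denom
    return t * x0, t * y0

# Side of a unit square, x + y = 1, hit along the direction (2, 1):
print(ray_line_intersection(2.0, 1.0, 1.0, 1.0, 1.0))   # (2/3, 1/3)
# A vertical segment x = d is the case alpha=1, beta=0, gamma=d.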
4.2 Formulae for piecewise elliptic boundaries
We suppose here that a piece of at least one of our boundary curves is part of a nontrivial ellipse with equation of the form $\big(\frac{x}{a}\big)^2+\big(\frac{y}{b}\big)^2=1$, centered at our vantage point $0$. Of course a line passing through $0$ and a different point $(x_0,y_0)$ intersects the ellipse at two distinct points. For the construction, we need the point of that intersection which is nearer to $(x_0,y_0)$, in the sense that its coordinates have the same signs as $x_0$ and $y_0$ respectively.
Writing this point as $(t\,x_0,\,t\,y_0)$ with $t>0$, we have
$t^2\Big(\frac{x_0^2}{a^2}+\frac{y_0^2}{b^2}\Big)=1 .$
This implies
$t=\Big(\frac{x_0^2}{a^2}+\frac{y_0^2}{b^2}\Big)^{-1/2} .$
We can then apply formulae (6)-(14) to build any generalized cloaks involving boundaries of elliptic type, where the center of the ellipse is the vantage point $0$.
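The same rule in code form (again a hypothetical helper mirroring the formula above):

import numpy as np

def ray_ellipse_intersection(x0, y0, a, b):
    """Nearer intersection (same signs as (x0, y0)) of the ray from the
    centre through (x0, y0) with the ellipse (x/a)**2 + (y/b)**2 = 1.
    Substituting (t*x0, t*y0) gives t**2 * ((x0/a)**2 + (y0/b)**2) = 1."""
    t = 1.0 / np.sqrt((x0 / a) ** 2 + (y0 / b) ** 2)
    return t * x0, t * y0

# Ellipse with semi-axes a=2, b=1, hit along the direction (1, 1):
print(ray_ellipse_intersection(1.0, 1.0, 2.0, 1.0))   # ~(0.894, 0.894)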
Figure 2: (Color online) A cross-like cloak (A), a sunflower-like cloak (C) and a hexagram cloak (D), all mimicking a circular cylinder (B) of small radius, cf. equation (17). The inner and outer boundaries of the petals are parts of ellipses rotated by suitable angles in (A) and (C). The hexagram (D) is generated from an equilateral triangle. The energy of the plane matter wave incident from the top corresponds, in the optics setting, to a transverse electromagnetic wave of given wavelength, with the celerity of light in vacuum normalized here to 1. To enhance the scattering, a flat mirror is located under each quantum cloak and obstacle.
This construction has been used in Figure 2 (A), Figure 2 (C), Figure 4 (E) and Figure 9 (B). We note that outside the cloaks in Figures 2 (A) and 2 (C), the scattered field is exactly the same as that of the small disc. When the radius of the disc in equation (17) tends to zero, the cloaks become singular and the plane matter wave goes unperturbed (invisibility).
Figure 3: (Color online) Upper panel: Profile of the backward matter wave along the x-axis for a plane wave incident from the top as in Fig. 2, for a cross-like cloak (solid blue curve, see A), a sunflower-like cloak (dashed red curve, see C), a hexagram cloak (dotted yellow curve, see D) and a circular cylinder of small radius (solid black curve, see B); Lower panel: idem for the profile of the forward matter wave. We note the large amplitude of the forward wave, due to the presence of a mirror below each nano-scatterer. The slight discrepancy between the curves is attributed to a numerical inaccuracy induced by the highly heterogeneous nature of the cloaks, further enhanced by the irregularity of the boundary of the star-shaped cloak (dotted yellow curve, see D). The amplitude of the wave in both panels can be normalized to 1 for comparison with Fig. 2.
Figure 4: (Color online) Left: A hollow parallelogram cylindrical region (A) scatters any incoming plane wave just like a much smaller solid cylinder (B) of the same nature. In (C) the same hollow parallelogram cylinder is designed to have the same response to waves as the small equilateral triangle (D). Right, squaring the circle: metamaterials allow one to make a circular cylindrical hollow region and a small solid square cylinder sharing the same center equivalent, as regards their signatures and the way waves see them. In all these cases, the coated regions not only gain the same electromagnetic signature as any desired other object, but also serve as cloaks with nonsingular material properties; in fact, the presence of any type of defect hidden inside them has no effect on the way they scatter waves. The energy of the plane matter wave incident from the top corresponds, in the optics setting, to a transverse electromagnetic wave of given wavelength, with the celerity of light in vacuum normalized here to 1.
4.3 Squaring the circle
In this section, we make a circle have the same (electromagnetic) signature as a virtual small square lying inside its enclosed region and sharing the same centre. In particular, to an observer its appearance will be that of a square. As above, the transformation maps the region enclosed between the small square and the outer circle onto the circular annulus bounded by the inner (cloaking) and outer circles, where the sides of the square are mapped to the inner circle and the outer circle stays fixed point-wise. To do so, we again use the diagonals of the square to partition those regions into sectors. Indeed, the diagonals provide a natural triangulation by dividing the region enclosed inside the square into four sectors, and the natural continuation of such a triangulation gives the needed one. In each sector, we apply the same formulas as above, where the square boundary is handled as in Section 4.1 and both circles as in Section 4.2. For the numerical simulation, we use a small square and two concentric circles, all centred at the vantage point. In both the uppermost and lowermost sectors (see Figure 4 (E)), formulas (15) for the square apply to the horizontal sides, whereas in the leftmost and rightmost sectors the relevant sides are the vertical segments of Section 4.1. For all sectors, formulas (17) apply to the circles.
The inner boundary of the cloak corresponds to a potential wall (with transformed Neumann boundary conditions, which also hold for infinitely conducting or rigid obstacles depending upon the physical context). We report these results in Figure 2 and Figure 4. Neumann boundary conditions are set on the ground plane, the inner boundary of the carpet and the rigid obstacle.
4.4 Star shaped cloaks
In this section, we call star shaped a region bounded by a star polygon, such as a pentagram, a hexagram, a decagram, and so on, as in [41]. Star shaped regions are particular cases of star regions. The design of star shaped cloaks requires an adapted triangulation of the corresponding region, that is, a triangulation that takes into account the singularities at the vertices of the boundary of the region, so that each vertex belongs to the edge of some triangle. To the resulting triangles, one applies the same maps as in Section 3. See e.g. Figure 2 (D).
4.5 Finite Element Analysis of the cloak properties
We now turn to specific numerical examples in order to illustrate the efficiency and feasibility of the cloaks we design.
Comparison of backward and forward scattering of isomorphic cloaks
Let us start with a comparison of both backward and forward scattering for the cloaks shown in Figure 2. We report in Figure 3 the amplitude of the matter wave above and below the scatterer (cloak and/or obstacle) along the x-axis. We note the slight discrepancy between the curves, which is a genuine numerical artefact: we have checked that the finer the mesh of the computational domain, the smaller the discrepancy (a good test for the convergence of the numerical package COMSOL Multiphysics). The mesh actually needs to be further refined within the heterogeneous anisotropic cloak and the perfectly matched layers, compared with the remaining part of the computational domain, which is filled with isotropic homogeneous material. We note that the yellow curve (corresponding to the hexagram, see Figure 2 D) is the most shifted with respect to the black curve (corresponding to the small obstacle on its own, i.e. the benchmark, see Figure 2 B). This can be attributed to the irregular boundary of the cloak, as analysed in the case of singular star-shaped cloaks in [41]: we considered around 70000 elements for the mesh in all four computational cases reported in Figure 3 in order to exemplify the numerical inaccuracies. For computations with 100000 elements, the yellow curve is shifted downwards and is nearly superimposed with the black curve. Moreover, the strong asymmetry of the yellow curve in the upper panel of Figure 3 vanishes for 100000 elements. We attribute this numerical effect to the artificial anisotropy induced by the triangular finite element mesh of the hexagram.
Analysis of the material parameters within a cloak
Another interesting point of this paper is that the metamaterial in our model is non-singular, thus enabling the implementation of potentially broadband cloaks over a wide range of wavelengths. Moreover, the cloaked objects display a mimesis phenomenon in that they are designed to acquire any desired quantum or electromagnetic signature.
The result of the numerical exploration of the three eigenvalues of the material tensor of Figure 4 (E) is reported in Figure 5. First, using the Maple software, we have purposely represented those eigenvalues in a wider domain, including the cloaked region, despite the boundary conditions on the inner boundary of the cloak. One clearly sees that all three eigenvalues take finite values everywhere, even inside the cloak itself. The material tensor is hence nonsingular. Due to the fourfold symmetric geometry of the cloak (all sectors are obtained by a rotation of any fixed one), we only need to look at those eigenvalues in one sector. We note a strong anisotropy in the azimuthal direction, in accordance with the fact that the cloak should detour the wave. For a singular cloak, one of the eigenvalues tends to zero on the inner boundary of the cloak, while another tends to infinity.
We also represent the finite element simulation of one of these eigenvalues, which exemplifies the fourfold symmetry of the isovalues within the circular cloak, a fact reminiscent of the fourfold symmetry of the square which the cloak mimics.
Figure 5: (A)-(C) are illustrations of the graphs of the three eigenvalues of the material tensor in the uppermost sector of Fig. 4 (E). We note that each of those three surfaces is strictly above the zero plane, even inside the cloak itself. Because all other sectors of the cloak are obtained by a rotation of the uppermost one, it suffices to study the eigenvalues in just one sector. The above surfaces were drawn using the MAPLE software. In (D) we represent the finite element computation of the eigenvalue of (B) for all four sectors of Fig. 4 (E) in the COMSOL MULTIPHYSICS package. We note the four-fold symmetry of the isovalues.
5 Generalized mirage effect and almost trapped states
It is known that a point source located inside the coating of a singular cloak (i.e. a cloak for which the inner virtual region in (6) shrinks to a point) leads to a mirage effect whereby it seems to radiate from a shifted location in accordance with this geometric transformation [23].
5.1 Shifted quantum dot inside the transformed space
This prompts the question as to whether a similar effect can be observed in non-singular cloaks, i.e. when the virtual region in (6) has finite size. As it turns out, the physics is now much richer: we can see in Figure 6 that when the source lies inside the coating, it not only seems to radiate from a shifted location in accordance with (6), but it is moreover in the presence of a small object. This can be seen as a generalized mirage effect which opens many new possibilities in optical illusions. Indeed, Nicolet et al. have proposed to extend the concept of the mirage effect for point sources located within the heterogeneous anisotropic coating of invisibility cloaks to finite size bodies which scatter waves like bodies reshaped by the geometric transform [24]. This is in essence an alternative path to our proposal for mimetism. However, this mirage effect can be further generalized to non-singular cloaks, for which a finite size body located inside the coating will now create the optical illusion of being another body in the presence of some obstacle. This bears some resemblance to Fata Morgana, a mirage which comprises several inverted (upside down) and erect (right side up) images stacked on top of one another. Such a mirage occurs because rays of light are bent when they pass through air layers of different temperatures. This creates the optical illusion of levitating castles over seas or lakes, as reported by a number of Italian sailors, hence the name related to Morgana, a fairy central to the Arthurian legend, said to be able to make huge objects fly over her lake.
Figure 6: A non-singular cloak with two square boundaries, in the presence of a quantum dot (resp. an electric current line source in the context of transverse electric waves). (A-C) When the source is located some distance away from the cloak, it seems to emit as if it were the same distance away from a small square obstacle. (B-D) When the source is located in the middle of the coating, it seems to emit as if it were some distance away from a small square obstacle, in accordance with (6).
5.2 Field confinement on resonances: anamorphism breakdown
Another intriguing feature of singular cloaks is their potential for light confinement associated with almost trapped states, which are eigenfields exponentially decreasing outside the invisibility region. Such modes were described in the context of quantum cloaks by Greenleaf, Kurylev, Lassas and Uhlmann in [20]. These researchers discovered that such modes are associated with energies for which the Dirichlet-to-Neumann map is not defined, i.e. on a discrete set of values. Here, we revisit their paper in light of non-singular quantum cloaks, that is, when we consider the blowup of a small region instead of a point. Our findings, reported in Figure 7 for a star-shaped and a rabbit-like non-singular cloak mimicking a small disc, bridge the quantum mechanical spectral problems (panels (A) and (B)) to the scattering problems (panels (C) and (D)) in the following way: we first look for eigenvalues (i.e. quantized energies) and associated eigenfunctions of Eq. (4) in the class of square integrable functions on the whole space (note that here, as the metric is non-singular, there is no need to consider a weighted Sobolev space). Note however that we set continuity conditions on the inner boundary of the cloak, instead of Neumann ones. This provides us with a discrete set of complex eigenfrequencies with a very small imaginary part (also known as leaky modes in the optical waveguide literature). We neglect this imaginary part and launch a plane wave on the non-singular cloak (whereby the invisibility region is also included within the computational domain, as we once again set continuity conditions on the inner boundary of the cloak) at the very frequency given by the spectral problem, see panels (C) and (D). We clearly see that the inside of the cloak hosts a quasi-localized eigenstate whose energy is mostly confined inside a star (panel C) and a rabbit (panel D), both of which actually scatter like a small disc.
Figure 7: Left panel: Modulus of the fundamental eigenstates associated with a quantized normalized energy (resp. a transverse electric plane wave) for a non-singular cloak shaped as a star (A) and a rabbit (B), both of which mimic a small disc of normalized radius of the order of nanometers; Right panel: Matter wave incident from the top on the quantum cloaks with a spatially varying potential with compact support (i.e. vanishing outside the cloak). The large amplitude of the field within the cloak is noted.
6 Generalized super-scattering for negatively refracting non-singular cloaks
In this section, we take some freedom with the one-to-one feature of the previous transforms and allow for space folding. This means that the map in (6) is no longer injective, the relevant Jacobian entries changing sign, thus making the transformed mass and potential strictly negative real-valued functions. It has been known for a while that space folding allows for the design of perfect lenses, corners and checkerboards [42, 43, 44, 2]. But it is only recently that researchers foresaw the very high potential of space folding as applied to the design of super-scatterers [45, 46, 47, 48, 49]. We generalize these concepts to mimetism via space folding.
The mapping leading to the super-scatterer is shown in Figure 8. We would like to emphasize that here we not only get a magnification of the scattering cross section of an object, in a way similar to what optical space folding does for a cylindrical perfect lens, but, importantly, we can also change the shape of the object.
We illustrate our proposal with a numerical simulation for a square obstacle surrounded by an anti-cloak in Figure 9(a), which mimics a larger square obstacle, see Figure 9(b). We note that the large field amplitude on the anti-cloak's upper boundary can be attributed to a surface field arising from the physical parameters with opposite signs across the cloak's outer boundary (an anisotropic mass density in the context of quantum mechanics, and an anisotropic permittivity in the context of optics). It is illuminating here to draw a correspondence with electromagnetic waves: the anisotropic mass density (resp. permittivity) indeed takes opposite values when we cross the outer boundary of the cloak, and this ensures the existence of a surface matter wave (resp. a surface plasmon polariton) clearly responsible for the large field amplitude (in the transverse electric wave polarization, i.e. for a magnetic field orthogonal to the computational plane). We further show an example of a small circular obstacle mimicking a large square obstacle in Figures 9(b) and 9(d). Once again, the large field amplitude on the outer cloak boundary comes from the complementary media inside and outside the cloak. We believe such types of mimetism might have tremendous applications in quantum dot probing, bringing the nano-world a step closer to metamaterials.
Figure 8: Construction of a generalized cloak with optical space folding for a super-scattering effect. The transformation with inverse (6) magnifies the region bounded by the two innermost surfaces into the region bounded by the inner and outer boundaries of the cloak. We note that the transform is no longer an isomorphism. The curvilinear metric inside the carpet (here, an orange flower) is described by the transformation matrix, see (7)-(14). Any quantum object located within the inner region scatters matter waves like a larger object (here, a yellow star).
Figure 9: (A-C) Any obstacle surrounded by a square anti-cloak scatters a plane matter wave (resp. a transverse electric plane wave in the context of optics) coming from the top like a larger square obstacle. (B-D) Any obstacle surrounded by a circular anti-cloak scatters a plane matter wave (resp. a transverse electric plane wave) coming from the top like a larger square obstacle. The large field amplitude on the upper boundary of the anti-cloak in (A) and (B) is noted. It can be attributed to some kind of surface matter wave (a surface plasmon polariton in the context of optics).
7 Conclusion
In this paper, we have proposed some models of generalized cloaks that create optical illusions. We focussed here on the Schrödinger equation, which is valid in a number of physical situations, such as matter waves in quantum optics. However, the results within this paper can easily be extended to the Helmholtz equation, which governs the propagation of acoustic and electromagnetic waves at any frequency. One simply needs to insert the transformation matrix within the shear modulus and density of an elastic bulk (in the case of anti-plane shear waves), the density and compressional modulus of a fluid (in the case of pressure waves), or the permittivity and permeability of a medium (in the case of electromagnetic waves) [11, 12, 13, 14, 15, 16]. In the latter context, this means that in transverse electric polarization (whereby the magnetic field is parallel to the fiber axis), infinitely conducting obstacles dressed with these cloaks display the electromagnetic response of other infinitely conducting obstacles. In these cloaks, an electric wire could in fact be hiding a larger object near it. Actually, any object could mimic the signature of any other one. For instance, we design a cylindrical cloak so that a circular obstacle behaves like a square obstacle, thereby bringing about one of the oldest enigmas of ancient times: squaring the circle! The ordinary singular cloaks then come as a particular case, whereby objects appear as an infinitely small, infinitely conducting object (of vanishing scattering cross-section) and hence become invisible. On the contrary, such generalized cloaks are described by non-singular permittivity and permeability, even at the cloak's inner surface. We have proposed and discussed some interesting applications in the context of quantum mechanics, such as probing nano-objects.
Obviously, one realizes that, when the inner virtual region responsible for mimetism tends to zero, one recovers the case in [1] where the material properties are no longer bounded, as one of the eigenvalues of the mass density matrix tends to zero whilst another one tends to infinity as we approach the inner boundary of the coated region, see also [21].
In this paper, we have played with optical illusions, trying to be as imaginative as possible in order to exhaust the possible geometric transforms we had at hand in Euclidean spaces (non-Euclidean cloaking is a scope for more creative thinking [28]). It should be pointed out that, while the emphasis of this paper was on quantum waves, corresponding non-singular cloaks in electromagnetism that have an inner boundary which is perfectly electric conducting (PEC) and scatter like a reshaped PEC object were investigated in [54, 55, 56, 57, 58, 59]. However, these works focussed mostly on the reduction of the scattering cross section of a diffracting object, while the present paper explores the mimetism effect whereby an object scatters like another object of any other scattering cross section (in particular reduced or enhanced ones).
Metamaterials [50] form a vast area with a variety of composites structured on the subwavelength scale in order to sculpt electromagnetic wave trajectories, as experimentally demonstrated at microwave frequencies by a handful of research groups worldwide [51, 52, 53]. Resonant elements within metamaterials are in essence man-made atoms allowing one to mimic virtually any electromagnetic response we wish, and this in turn allows us to push the frontiers of photonics towards previously unforeseen areas.
AD and SG acknowledge funding from EPSRC grant EPF/027125/1. The authors also wish to thank the anonymous referees for constructive critical comments.
1. Pendry J B, Schurig D and Smith D R 2006 ”Controlling electromagnetic fields,” Science 312 1780-1782.
2. Leonhardt U 2006 ”Optical conformal mapping,” Science 312, 1777-1780
3. Leonhardt U and Philbin T G 2006 ”General Relativity in Electrical Engineering,” New J. Phys. 8, 247
4. Veselago V G 1967 Usp. Fiz. Nauk 92 517-526
5. Veselago V G 1968 Sov. Phys. Usp. 10 509-514
6. Pendry J B 2000 ”Negative refraction makes a perfect lens,” Phys. Rev. Lett. 86, 3966-3969.
7. Smith D R, Padilla W J, Vier V C, Nemat-Nasser S C and Schultz S 2000 Phys. Rev. Lett. 84, 4184-4187
8. Ramakrishna S A 2005 Rep. Prog. Phys. 68 449-521
9. Cheianov V V, Fal’ko V, Altshuler B L 2007 Science 315, 1252-1255
10. Moreno E, Fernández-Domínguez A I, Cirac J I, García-Vidal F J and Martín-Moreno L 2005 Phys. Rev. Lett. 95 1704061-1704064
11. Milton G W, Briane M and Willis J R 2006 New J. Phys. 8 248
12. Cummer S A and Schurig D 2007 New J. Phys. 9 45
13. Torrent D and Sanchez-Dehesa J 2008 New J. Phys. 10 023004
14. Cummer S A, Popa B I, Schurig D, Smith D R, Pendry J, Rahm M and Starr A 2008 Phys. Rev. Lett. 100 024301
15. Chen H and Chan C T 2007 Appl. Phys. Lett. 91 183518
16. Farhat M, Enoch S, Guenneau S and Movchan A B 2008 ”Broadband cylindrical acoustic cloak for linear surface waves in a fluid,” Phys. Rev. Lett. 101 134501
17. Brun M, Guenneau S and Movchan A B 2009 ”Achieving control of in-plane elastic waves,” Appl. Phys. Lett. 94 061903
18. Farhat M, Guenneau S, Enoch S and Movchan AB 2009 ”Cloaking bending waves propagating in thin plates,” Phys. Rev. B 79 033102
19. Zhang S, Genov D A, Sun C and Zhang X 2008 ”Cloaking of matter waves,” Phys. Rev. Lett. 100 123002
20. Greenleaf A, Kurylev Y, Lassas M, Uhlmann G 2008 New J. Phys. 10 115024
21. Kohn R V, Shen H, Vogelius M S and Weinstein M I 2008 ”Cloaking via change of variables in electric impedance tomography,” Inverse Problems 24 015016
22. Greenleaf A, Kurylev Y, Lassas M, Uhlmann G 2007 ”Full-wave invisibility of active devices at all frequencies,” Comm. Math. Phys. 275(3) 749-789
23. Zolla F, Guenneau S, Nicolet A and Pendry J B 2007 ”Electromagnetic analysis of cylindrical invisibility cloaks and mirage effect,” Opt. Lett. 32 1069-1071
24. Nicolet A, Zolla F, and Geuzaine C, ”Generalized Cloaking and Optical Polyjuice,” ArXiv:0909.0848v1.
25. Nicorovici N A, McPhedran R C and Milton G W 1994 ”Optical and dielectric properties of partially resonant composites,” Phys. Rev. B 49 8479-8482
26. Torres M, Adrados J P, Montero de Espinosa F R, Garcia-Pablos D, and Fayos J 2000 Phys. Rev. E 63 011204
27. Greenleaf A, Lassas M and Uhlmann G 2003 ”On nonuniqueness for Calderon’s inverse problem,” Math. Res. Lett. 10 685-693
28. Leonhardt U and Tyc T 2008, ”Broadband invisibility by an euclidean cloaking,” Science 323(5910), 110-112
29. Zhang P, Jin Y, and He S 2008 ”Obtaining a nonsingular two-dimensional cloak of complex shape from a perfect three-dimensional cloak,” Appl. Phys. Lett. 93, 243502-243504
30. Collins P and McGuirk J 2009 ”A novel methodology for deriving improved material parameter sets for simplified cylindrical cloaks,” J. Opt. A: Pure Appl. Opt. 11, 015104-015111
31. Liu R, Ji C, Mock J J, Chin J Y, Cui T J and Smith D R 2008 ”Broadband Ground-Plane Cloak,” Science 323 366-369
32. Li J and Pendry J B 2008 ”Hiding under the Carpet: A New Strategy for Cloaking,” Phys. Rev. Lett. 101 203901-4
33. Pendry J B and Li J 2008 New J. Phys. 10 115032
34. Norris A N 2008 Proc. Roy. Soc. Lond. A 464 2411-2434
35. Nicorovici N A P, McPhedran R C, Enoch S and Tayeb G 2008 ”Finite wavelength cloaking by plasmonic resonance,” New J. Phys. 10 115020
36. Milton G and Nicorovici NA 2006 ”On the cloaking effects associated with anomalous localized resonance,” Proc. Roy. Soc. Lond. A 462, 3027
37. Alu A and Engheta N 2005 ”Achieving Transparency with Plasmonic and Metamaterial Coatings,” Phys. Rev. E 95, 016623
38. Greenleaf A, Kurylev Y, Lassas M, Uhlmann G 2008 ”Electromagnetic wormholes via handlebody constructions,” Comm. Math. Phys. 281 (2) 369-385
39. Gabrielli L H, Cardenas J, Poitras C B and Lipson M 2009 ”Silicon nanostructure cloak operating at optical frequencies,” Nature Photonics 3 461-463
40. Diatta A, Guenneau S, Dupont G and Enoch S 2010 ”Broadband cloaking and mirages with flying carpets,” Opt. Express 18 11537-11551
41. Diatta A, Nicolet A, Guenneau S and Zolla F 2009 ”Tessellated and stellated invisibility,” Opt. Express 17 13389-13394
42. Pendry JB and Ramakrishna SA 2003 ”Focussing light with negative refractive index,” J. Phys.: Condens. Matter 15, 6345
43. Guenneau S, Vutha AC and Ramakrishna SA 2005 ”Negative refractive in checkerboards related by mirror-antisymmetry and 3-D corner reflectors,” New J. Phys. 7, 164
44. Milton GW, Nicorovici NAP, McPhedran RC, Cherednichenko K and Jacob Z 2008 ”Solutions in folded geometries, and associated cloaking due to anomalous resonance,” New J. Phys. 10, 115021
45. Zhang JJ, Luo Y, Chen SH, Huangfu J, Wu BI, Ran L and Jong JA 2009 ”Guiding waves through an invisible tunnel,” Opt. Express 17, 6203
46. Lai Y, Chen H, Zhang ZQ and Chan CT 2009 ”Complementary Media Invisibility Cloak that Cloaks Objects at a Distance Outside the Cloaking Shell,” Phys. Rev. Lett. 102, 093901
47. Lai Y, Ng J, Chen HY, Han DZ, Xiao JJ, Zhang ZQ and Chan CT 2009 ”Illusion Optics: The Optical Transformation of an Object into Another Object,” Phys. Rev. Lett. 102, 253902
48. Ng J, Chen HY and Chan CT 2009 ”Metamaterial frequency-selective superabsorber,” Opt. Lett. 34, 644
49. Wee WH, Pendry JB 2010 ”Super phase array,” New J Phys. 12 033047
50. Zheludev NI 2010 Science 328, 582
51. Schurig D, Mock J J, Justice B J, Cummer S A, Pendry J B, Starr A F, Smith D R 2006 ”Metamaterial electromagnetic cloak at microwave frequencies,” Science 314 977-980
52. Kante B, Germain D, de Lustrac A 2009 ”Experimental demonstration of a nonmagnetic metamaterial cloak at microwave frequencies,” Phys. Rev. B 80 201104
53. Tretyakov S, Alitalo P, Luukkonen O, Simovski C 2009 ”Broadband electromagnetic cloaking of long cylindrical objects,” Phys. Rev. Lett. 103 103905
54. Cummer SA, Liu R and Cui TJ 2009 ”A rigorous and nonsingular two dimensional cloaking coordinate transformation,” Jour. Appl. Phys. 105, 056102
55. Hu J, Zhou X and Hu G 2009 ”Nonsingular two dimensional cloak of arbitrary shape,” Appl. Phys. Lett 95, 011107
56. Jiang WX, Cui TJ, Yang XM, Cheng Q, Liu R and Smith DR 2008 ”Invisibility cloak without singularity,” Appl. Phys. Lett. 93, 194102
57. Chen H, Zhang X, Luo X, Ma H and Chan CT 2008 ”Reshaping the perfect electrical conductor cylinder arbitrarily,” New J. Phys. 10, 113016
58. Li C, Yao K and Li F 2008 ”Two-dimensional electromagnetic cloaks with non-conformal inner and outer boundaries,” Opt. Express 16(23), 19366
59. Jiang WX, Ma HF, Cheng Q and Cui TJ 2010 ”A class of line transformed cloaks with easily realizable constitutive parameters,” Jour. Appl. Phys. 107, 034911
Deterministic system
From Wikipedia, the free encyclopedia
In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system.[1] A deterministic model will thus always produce the same output from a given starting condition or initial state.[2]
In physics
Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly.
In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic.
In mathematics
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions.[3] This sensitivity to initial conditions can be measured with Lyapunov exponents.
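A minimal illustration of this sensitivity (our own toy example, not from the cited sources), using the fully deterministic logistic map:

# The logistic map x -> r*x*(1-x) is deterministic, yet at r = 4 two
# nearby initial states diverge roughly exponentially, which is what a
# positive Lyapunov exponent quantifies.
r = 4.0
x, y = 0.2, 0.2 + 1e-10     # two almost identical initial conditions
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))           # of order 1, despite the 1e-10 difference at start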
Markov chains and other random walks are not deterministic systems, because their development depends on random choices.
In computer science
A deterministic model of computation, for example a deterministic Turing machine, is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state.
A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. There may be non-deterministic algorithms that run on a deterministic machine, for example, an algorithm that relies on random choices. Generally, for such random choices, one uses a pseudorandom number generator, but one may also use some external physical process, such as the last digits of the time given by the computer clock.
A pseudorandom number generator is a deterministic algorithm designed to produce sequences of numbers that behave as random sequences. A hardware random number generator, however, may be non-deterministic.
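As a sketch of this point (a toy generator using the well-known Numerical Recipes constants), the same seed always reproduces the same stream:

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a deterministic recurrence whose
    output merely looks random."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

g1, g2 = lcg(42), lcg(42)
print([round(next(g1), 6) for _ in range(3)])
print([round(next(g2), 6) for _ in range(3)])   # identical output: same seed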
In economics
The Ramsey–Cass–Koopmans model is deterministic. The stochastic equivalent is known as Real Business Cycle theory.
References
1. ^ deterministic system - definition at The Internet Encyclopedia of Science
2. ^ Dynamical systems at Scholarpedia
3. ^ Boeing, G. (2016). "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction". Systems. 4 (4): 37. arXiv:1608.04416. doi:10.3390/systems4040037. Retrieved 2016-12-02.
M.Sc. Physics - Kashmir Student | 24 Hour Education Service - JKBOSE, Kashmir University, IUST, BGSBU
Saturday, January 1, 2011
M.Sc. Physics
M.Sc. Physics Entrance Syllabus for admission to the P.G. Programme in Physics – 2013.
Classical Mechanics: Review of laws of Motion, Components of Velocity and Acceleration in Cartesian Coordinates, Spherical Polar and Cylindrical Coordinates, Inertial & Non-inertial Frames, Uniformly Rotating Frame, Centripetal Acceleration, Coriolis Force and its applications.
Special Theory of Relativity: Reference Systems, Inertial Frames, Galilean Transformation, Conservation Laws, Propagation of light, Michelson-Morley Experiment, Search for ether. Postulates of Special Theory of Relativity, Lorentz Transformation, Length Contraction, Time Dilation, Velocity Addition Theorem, Variation of Mass with Velocity, Mass-Energy Equivalence, Particles with zero rest mass.
Motion in a Central Force Field: Kepler’s Laws and their derivation, Gravitational Law and Field, Potential due to a Spherical Shell, Sphere and Disc.
Systems of Particles: Centre of Mass, Equations of Motion, Conservation of Linear and Angular Momentum, Conservation of Energy, Principle of Rockets and its equation.
Rigid Body Motion: Rotational Motion, Moments of Inertia and their Products, Angular Momentum, Torque, Principal Moments and Axes, Euler’s equations.
Elasticity and Small Deformations: Hooke’s Law, Elastic Constants for an Isotropic Solid, Beams supported at both ends, Cantilever, Torsion of a cylinder, Bending Moments and Shearing Forces.
Kinematics of Moving Fluids: Equations of Continuity (Differential form), Euler’s equations, Bernoulli’s theorem, Viscous Fluids, Streamline and Turbulent Flow, Reynolds number, Poiseuille’s equation and its derivation.
Harmonic Oscillations: Differential equation and its solutions, Kinetic and Potential Energy, examples of Simple Harmonic Oscillations, Spring and Mass System, Simple and Compound Pendulum, LC circuit, Oscillations of two Masses connected by a Spring.
Superposition: Superposition of two Mutually Perpendicular Simple Harmonic Vibrations of the Same Frequency, Lissajous figures, Case of Different Frequencies.
Forced Oscillations: Damped Harmonic Oscillator, Power Dissipation, Quality Factor, Driven Harmonic Oscillator, Transient and Steady States.
Vector Calculus: Scalar and Vector Fields, Triple Vector Product, Gradient of a Scalar Field and its geometrical interpretation, Divergence and Curl of a Vector Field, Line, Surface and Volume Integrals, Physical interpretation of Curl and Divergence, Gauss’s Divergence Theorem, Green’s and Stokes’ Theorems.
Integral Calculus: Repeated Integrals of a Function of more than one Variables, Definition of a Double and Triple integral, Evaluation of Double and Triple Integrals as Repeated Integrals.
Electrostatics: Multipole Expansion of E for Distribution of Charge at Rest, Dipole and Quadrupole Fields, Electrostatic Field Energy, Force per unit area on the surface of a conductor in an electric field, Point Charge in front of a Grounded Plane Infinite Conductor.
Dielectrics: Parallel Plate Capacitor with a Dielectric, Dielectric Constant, Polarization and Polarization Vector P, Displacement Vector D, Relation Between E, P & D, Boundary Conditions Satisfied by E and D at the Interface Between Two Homogenous Dielectrics, Illustration Through Simple Examples.
Current Electricity: Steady Current, Current Density J, Non-Steady Currents and Continuity Equation, Rise and Decay of Current in LR and RC circuits, Decay Constants, Transients in LCR Circuits, AC circuits, Complex Numbers and their applications in solving AC circuit problems, Complex Impedance and Reactance, Series and Parallel Resonance, Q factor, Power Consumed by an AC circuit, Power Factor.
Magnetostatics: Multipole Expansion of B, Magnetic Dipole Moment, Biot and Savart’s law, Ampère’s Circuital Law, ∇·B = 0, ∇×B = μ₀J, Magnetization Current, Magnetization Vector, H Field (magnetizing field), Calculation of H in Simple Geometrical Situations (Hysteresis Loop, Rowland ring), Susceptibility and Magnetic Permeability (linear cases).
Electromagnetic Theory: Faraday’s Laws, Integral and Differential Forms, Energy in a Static Magnetic Field, Maxwell’s Displacement Current, Maxwell’s Equations, Electromagnetic Field Energy Density. The wave equation satisfied by E and B, Plane Electromagnetic Waves in Vacuum, Poynting Vector and Theorem, Reflection and Refraction at a Plane Boundary of Dielectrics.
Basic concepts in Kinetic Theory of Matter: degrees of freedom, equipartition of energy, specific heat of a monatomic gas, extension to di- and tri-atomic gases, behaviour at low temperatures, adiabatic expansion of an ideal gas, application to atmospheric physics.
Transport phenomena in gases: molecular collisions, mean free path and collision cross section, estimates of molecular diameter and mean free path, transport of mass, momentum and energy and their inter-relationship, dependence on temperature and pressure. Van der Waals gas: equation of state, nature of van der Waals forces, comparison with experimental P-V curves, the critical constants, Joule expansion of an ideal gas and of a van der Waals gas, Joule coefficient, estimates of J-T cooling.
Liquefaction of gases: Boyle temperature and inversion temperature, principle of regenerative cooling and of cascade cooling, liquefaction of hydrogen and helium, refrigeration cycles, meaning of efficiency.
Review of laws of thermodynamics: Zeroth, 1st & 2nd laws, concept of thermal equilibrium, internal energy, Carnot theorem. Entropy, principle of increase of entropy, the thermodynamic scale of temperature, its identity with the perfect gas scale, impossibility of attaining absolute zero, third law of thermodynamics.
Thermodynamic relationships: Thermodynamic variables, extensive and intensive, Maxwell’s general relationship. Clausius-Clapeyron heat equation, thermodynamic potentials and equilibrium of thermodynamical systems, relation with thermodynamical variables. Cooling due to adiabatic demagnetization.
The Statistical basis of thermodynamics: probability and thermodynamic probability, principle of equal a priori probability, probability distribution and its narrowing with increase in number of particles. The expressions for average properties, constraints, accessible and inaccessible states, distribution of particles with a given total energy into a discrete set of energy states, microstates and macrostates.
Probability and entropy: Boltzmann entropy relation, Statistical interpretation of the second law of thermodynamics, Boltzmann Canonical distribution law and its application, the rigorous form of equipartition of energy.
Maxwellian distribution of speeds in an ideal gas: distribution of speed and of velocities, experimental verification, distinction between mean, rms and most probable speed values.
Waves in media: speed of transverse waves on a uniform string, speed of longitudinal waves in a fluid, energy density and energy transmission in waves, Waves over liquid surface, concept of gravity waves and ripples. Group velocity and phase velocity.
Standing waves: standing waves as normal modes of bounded systems, examples, Production and detection of ultrasonic waves and their applications.
Acoustics: Limits of human audibility, intensity and loudness, bel & decibel, the musical scale, transducers and their characteristics (microphone, piezoelectric system), Reverberation, Sabine’s formula.
General theory of image formation: cardinal points of an optical system, general relationships, thick lens formula and lens combinations, Lagrange equation of magnification.
Aberration in images: chromatic aberrations, achromatic combination of lenses in contact and separated lenses. Monochromatic aberrations and their reduction. Aspherical mirrors and Schmidt corrector plates, aplanatic points. Common types of eyepieces: Huygens and Ramsden eyepieces.
Interference of light: the principle of superposition, two-slit interference, coherence requirement for the sources, optical path retardation, lateral shift of fringes, colours of thin films.
Interferometry: Michelson interferometer, its application for precision determination of wavelength, wavelength difference and width of spectral lines. Multiple beam interference, Fabry-Perot interferometer and etalon.
Fresnel diffraction: Fresnel half-period zones, zone plate, straight edge, rectilinear propagation.
Fraunhofer diffraction: diffraction at a slit, the intensity distribution. Diffraction at a circular aperture, resolution of images, Rayleigh criterion, resolving power of telescopic and microscopic systems.
Diffraction gratings: diffraction at N parallel slits, intensity distribution for N parallel slits. Plane diffraction grating, resolving power of a grating.
Polarization: Polarization by refraction, Malus's law, refraction in uniaxial crystals. Rotation of the plane of polarization, origin of optical rotation in liquids and in crystals.
Laser Systems: Purity of a spectral line, coherence length and coherence time, spatial coherence of a source, Einstein A and B coefficients, spontaneous and induced emission, conditions for laser action, population inversion, simple applications of lasers.
Unit X
Origin of quantum theory: Black body radiation, failure of classical physics to explain (a) UV catastrophe (b) photoelectric effect, Planck’s radiation law.
Wave-particle duality: de Broglie's hypothesis of matter waves, the concept of wave packets and group velocities, evidence for diffraction and interference of particles, Davisson-Germer experiment, Heisenberg's uncertainty relation for p and x, its extension to energy and time, consequences of the uncertainty relation for a particle in a box.
Schrödinger equation: postulates of quantum mechanics, operators, expectation values, transition probabilities, applications to particle in a one and three dimensional box.
Hydrogen atom, natural occurrence of n, l and m quantum numbers, related physical quantities, comparison with Bohr’s theory.
Unit XI
Atomic Physics: associated magnetic moment, spin-orbit coupling, quantum number j, spatial quantization, Stern-Gerlach experiment, Pauli's exclusion principle. Spectra of hydrogen and sodium, spectral terms, doublet fine structure, screening constants for sodium for s, p, d, f states, selection rules. Singlet and triplet fine structure in alkaline earth spectra, LS coupling and J-J coupling, weak and strong field Zeeman effects, Landé g factor, X-ray spectra, continuous X-ray spectrum, characteristic X-rays, Moseley's law, X-ray absorption spectra.
Discrete set of electronic energies of molecules: Quantization of vibrational and rotational energies, determination of inter-nuclear distances, pure rotational and rotation-vibration spectra, Dissociation limit for the ground and other electronic states, transition rules for pure vibration and electronic-vibration spectra.
Raman effect: Stokes and anti-Stokes lines, complementary character of Raman and infrared spectra, experimental arrangements for Raman spectroscopy.
Unit XII
Structure of Nuclei: Basic properties (angular momentum, magnetic moment, quadrupole moment and binding energy). Deuteron problem, concept of nuclear forces. Beta decay, parity violation. Range of alpha particles, Geiger-Nuttall law, Gamow's explanation of alpha decay.
Nuclear Models: shell model, liquid drop model, compound nucleus, fission and fusion, energy production in stars by p-p and carbon-nitrogen cycles.
Interaction of particles and detectors: interaction of charged particles and neutrons with matter, working of nuclear detectors, G. M. Counter, Proportional Counter, Cloud Chambers, Spark Chambers, Emulsions.
Crystal Structure: periodicity, lattice and bases, fundamental translation vectors, unit cell, Wigner-Seitz cell. Laue's theory of X-ray diffraction, Laue's theory and Bragg's law, Laue patterns, allowed rotations, lattice types, lattice planes, reciprocal lattice.
Bonding: Potential between a pair of atoms, Lennard-Jones potential, concept of cohesive energy, covalent, van der Waals, ionic and metallic crystals.
Magnetic Properties: Atomic magnetic moment, magnetic susceptibility, Langevin theory of diamagnetism and paramagnetism, ferromagnetism, ferromagnetic domains.
Thermal Properties: Lattice vibrations, vibrations of a one-dimensional monatomic chain under the harmonic and nearest-neighbour interaction approximation, concept of phonons, density of modes (1-D), Debye model, lattice specific heat, low temperature limit.
Motion of electrons: quantum mechanical free electron theory, Fermi energy, Fermi velocity, Fermi sphere, conductivity and Ohm's law (explanation on the basis of displacement of the Fermi sphere).
Unit XIV
Band Structure: electrons in a periodic potential, Kronig-Penney model, concept of Brillouin zones and explanation of energy bands, energy gap, metals, insulators, and semiconductors.
Semiconductors: intrinsic semiconductors, electrons and holes, Fermi level, temperature dependence of electron and hole concentration. Doping, impurity states, n- and p-type semiconductors, conductivity, mobility, Hall effect, Hall coefficient.
Semiconductor devices: metal-semiconductor junction, p-n junction, energy level diagrams, majority and minority carriers, tunnel diode, light emitting diode (LED), solar cell.
Field Effect Transistor: JFET volt-ampere characteristics, biasing the JFET, AC operation of the JFET, FET as a variable voltage resistor.
MOSFET: Depletion and Enhancement mode MOSFETs, biasing the MOSFET, digital MOSFET circuits (NAND & NOT gates).
Unit XV
Power supply: Diode as a circuit element, load line concept, rectification, ripple factor. Zener diode, voltage stabilization. Electronic voltage regulation, characteristics of a transistor in CB, CE, and CC modes, graphical analysis of the CE configuration, low frequency equivalent circuits, h-parameters, transistor biasing techniques (voltage divider), bias stability, thermal runaway.
Small signal Amplifiers: General principles of operation, classification, distortion, RC coupled amplifier, gain-frequency response, derivation of gain, input and output impedance at mid-frequencies. Transformer coupled amplifiers, expression of gain at mid-frequency. Emitter follower, determination of gain at low frequencies. Common source JFET amplifier with expression for voltage gain at mid-frequency. |
7b089162a364668f | Bad science Medicine Physics Quackery Skepticism/critical thinking
Luminas Pain Relief Patches: Where the words “quantum” and “energy” really mean “magic”
Orac discovers the Luminas Pain Relief Patch. He is amused at how quacks confuse the words “quantum” and “energy” with magic.
Luminas Pain Relief Patches: They cure everything through…energy (wait, no, magic).
Energy. Quacks keep using that word. I do not think it means what they think it means. Certainly Luminas doesn’t. Yes, I know that I use a lot of variations on that famous quote from The Princess Bride all the time, probably more frequently than I should and likely to the point of annoying some of my readers, but, damn, if it isn’t a nearly all-purpose phrase to use to riff on various quackery.
Also, if there’s one concept that quacks love to abuse, it’s energy. Whether it’s “energy healing” like reiki, where practitioners claim to be able to channel healing energy from the magical mystical “universal source” specifically into their patient to specifically heal whatever ails them, even if it’s from a distance or you’re a dog, or “healing touch,” where practitioners claim to be able to manipulate their patients’ “life energy” fields, again to healing effect, so much quackery is based on a misunderstanding of “energy” as basically magic. So it is with some spectacularly hilarious woo that I came across last week and, given that it’s Friday, decided to feature as a sort of Friday Dose of Woo Lite. It even abuses quantum theory because of course it does. So much quackery does.
So what are we talking about here? What is Luminas? To be honest, more than anything else, it reminds me of the silly “Body Vibes” energy stickers that Gwyneth Paltrow and Goop were selling last year (and probably still are) that claim to “rebalance the energy frequency in our bodies,” whatever that means. So let’s look at the claims.
Right on the front page of the Luminas website, you’ll find a video. It’s well-produced, as many such videos for quackery are, and it blathers on about how the product being advertised takes advantage of “revolutions in quantum physics,” as a lot of quackery does. Let’s see how this lovely patch supposedly works.
The basic claim is that the Luminas patch is charged with the “energetic signatures of natural remedies known for centuries to reduce inflammation.” These natural remedies include “Acetyl-L-Carnitine, Amino Acids, Arnica, Astaxanthin, B-Complex, Berberis Vulgaris, Bioperine, Boluoke, Boswellia, Bromelain, Chamomile, Chinchona, Chondroitin, Clove, Colostrum, CoQ10, Cordyceps, Curcumin, Flower Essences Frankincense, Ginger, Ginseng, Glucosamine, Glutathione, Guggulu, Hops Extract, K2, Lavender, Magnesium, Motherwort, MSM, Olive Leaf, Omega-3, Peony, Proteolytic Enzymes, Polyphenols, Rosemary Extract, Telomerase Activators, Turmeric, Vinpocetine, Vitamin D, White Willow Bark and over 200 more!”
Luminas Pain Relief Patches: here’s the excuse to show partially naked bodies.
Don’t believe me? Take a look at the video on this page! It starts out with an announcer opining about how “energy is all around us.” (Well, yes it is, but that doesn’t mean your nonsense product works.) The announcer then goes on about how Luminas somehow infuses its patches with the energy from the substances above:
…energy that your body inherently knows how to absorb and use with absolutely no side effects.
What? Not even skin irritation from the patch or any of the adhesive used to stick the patch to your body? I find that hard to believe. I mean, even paper tape can cause irritation! Fear not, though! The announcer continues:
Through the use of quantum physics scientists and doctors now have the ability to store the energetic signatures of hundreds of pain- and inflammation-relieving remedies on a single patch. Once applied, your body induces the flow of energy from the patch, choosing which electrons it needs to reduce inflammation. Science, relieving pain, with the power of nature.
So. Many. Questions. How, for instance, do the Luminas “scientists” store these “energetic signatures” on a patch? (More on that later.) What, exactly, is an “energetic signature”? How does the body know which electrons it needs to reduce inflammation and pain? As a surgeon and scientist with a PhD in cellular physiology, I’d love to know the physiologic mechanism by which the body can distinguish one electron from another, given that there really is no known biological (or, come to think of it, no physical) mechanism for that to happen. If Luminas has discovered one, its scientists should be nominated for the Nobel Prize.
Let’s get back to a key question, though: How on earth is all this energy goodness concentrated into a little patch roughly the size of a playing card? Physicists and chemists are going to guffaw at the answer, I promise you. First, the same page linked to above also notes that the “patches contain no active ingredients” because they “are charged with electrons captured from” the substances listed above. So is this some form of homeopathy? Of course not! Look at the video, which shows magical energy swirling off of the natural remedies and winding its way into the patch! There’s your energy, you unbeliever, you! How can you possibly question it?
But, hey, the makers of Luminas know that there are science geeks out there; so for their benefit the FAQ includes an explanation of just how much natural product-infused electrony goodness you can expect in a single patch:
For the geeks and scientists among us: Each patch contains 5.2 x 10^19 molecular structures, each with 2 oxygen polar bonding areas capable of holding a targeted, host electron, creating a total possible charging capacity equal to 10.4 x 10^19 host electrons. After considering the average transmission field voltage of humans (200 micro volts) we can calculate the relative capacity, per square inch of patch, at 333 Pico Farads.
So basically, they’re saying that each patch contains around 86 micromoles of…whatever…and that that whatever can bind…electrons, I guess. Somewhere, far back in the recesses of my mind and buried in the mists of time from decades ago, my knowledge from my undergraduate chemistry degree and the additional advanced physics courses stirred—and then screamed! I can’t wait to see what actual physicists and chemists whose knowledge is in active use think of this. I apologize in advance if I cause them too much pain by showing them this. Not everyone’s neurons are as resistant as mine to apoptosis caused by waves of burning stupid. It is a resistance built up over 14 years of examining claims like those of Luminas.
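For anyone who wants to check that mole figure, here is a minimal sanity-check sketch in Python (my arithmetic, not anything Luminas provides); the only inputs are the FAQ's quoted count and Avogadro's number:

# Convert the FAQ's quoted "molecular structures" per patch into moles.
AVOGADRO = 6.022e23      # molecules per mole
structures = 5.2e19      # "molecular structures" per patch, per the Luminas FAQ

moles = structures / AVOGADRO
print(f"{moles * 1e6:.0f} micromoles per patch")   # prints: 86 micromoles per patch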
Who, I wondered, developed this amazing product? In the first video, we discover that it is a woman named Sonia Broglin, who is the director of product development at Luminas. Naturally, she’s featured with a monitor in the background showing what look like infrared heat images of people. I actually laughed out loud as the video went on, because it shows her in very obviously posed and scripted interactions with patients with no shirts on and up to several of these patches all over their torso and arms. Me being me, I had to Google her, and guess what I found? Surprise! Surprise! She’s listed as a certified EnergyTouch® practitioner who graduated from the EnergyTouch® School of Advanced Healing. What, you might ask, is EnergyTouch®? This:
Energy Touch® is an off-the-body multidimensional healing process that allows the Energy Touch® Practitioner to access outer levels of the human energy field. It is based on the understanding that the human energy field is a dynamic system of powerful influences, in unique relationship to physical, emotional, and spiritual wellbeing. This system consists of the field (aura), chakras (energy centers) and the energy of the organs and systems of the body.
We readily accept the many ways that our body functions and is powered by energy. Our heart beats using energy pulses. Our brain and nervous system communicates with our entire body through complex energetic pathways. Our human energy field is constantly reacting in response to the physical and emotional and spiritual needs of our body.
EnergyTouch® is distinctive in the field of energy healing in that the work takes place in a more expanded energy field allowing the practitioner to work on a cellular level. Our work includes accessing an energetic hologram of the physical body, which is a unique and vital aspect of EnergyTouch® Healing. This energetic hologram acts as a matrix connecting the energies of the outer levels of the field precisely with the physical body on a cellular level.
EnergyTouch® practitioners are skillfully capable of moving fluently throughout the levels of the human energy field, to access and utilize outer level energies to clear blocks and restore function at the most basic cellular level.
It’s all starting to make sense now. That is some Grade-A+, seriously energy woo there, and I’m guessing Broglin cranked it up to 11 when developing the Luminas patches.
Next up is someone named Dr. Craig Davies, who is billed as “Pro Sports Doctor.” Yes, but a doctor of what? It didn’t take much Googling to figure out that Davies is not a physician. He is a chiropractor, because of course he is. He has actually worked on the PGA tour, apparently adjusting the spines of professional golfers.
Then there’s Dr. Ara Suppiah. Unlike Davies, Dr. Suppiah appears to be more legit:
He is a practicing ER physician, Chief Wellness Officer for Emergency Physicians of Florida and an assistant professor at the University of Central Florida Medical School. He also is the personal physician for several top PGA Tour professionals, including Henrik Stenson, Justin Rose, Gary Woodland, Graeme McDowell, Ian Poulter, Steve Stricker, Hunter Mahan, Jimmy Walker, Vijay Singh, Graham DeLaet, and Kevin Chappell, as well as LPGA Tour players Anna Nordqvist and Julieta Granada.
However, his Twitter bio describes him as doing “functional sports medicine,” which suggests to me functional medicine, which is not exactly science-based. Basically, Dr. Suppiah looks like an ER doc turned sports medicine doc who was a bit into woo but has dived both feet first into the deep end of energy medicine pseudoscience by endorsing these Luminas patches. Seriously, a physician should really know better, but clearly Dr. Suppiah doesn’t. Either that, or the money was good.
Ditto Dr. Ashley Anderson, a nurse practitioner who also gives an endorsement. She’s affiliated with Athena Health and Wellness, a practice that mixes standard women’s health treatments with “integrative medicine” quackery like acupuncture, reflexology, traditional Chinese medicine, and the like.
Given the claims being made, you’d think that Luminas would have some…oh, you know…actual scientific evidence to support its patch. The video touts “astounding results” from Luminas’ patient trials, but what are those trials? Certainly they are not published anywhere that I could find in the peer-reviewed literature, and I could find no registered clinical trials. What I did find on the Luminas website is a hilariously inept trial in which patients were imaged using thermography (which, by the way, is generally quackery when used by alternative medicine practitioners).
Luminas Pain Control Patches: Wait! Don’t you believe our patient studies that are totally not clinical trials? Come on! It’s science, man!
So. Many. Questions. About. This. Trial. For instance, was there a randomized controlled trial of the Luminas patch versus an identical patch that wasn’t infused with the magic electrony goodness of the Luminas patch? (My guess: No.) I also know from my previous studies that thermography is very dependent on maintaining standardized conditions and a rigorously controlled room temperature, as well as on using rigorously standardized protocols. Did Luminas do that? It sure doesn’t look like it. It looks as though Broglin just did thermography on people, slapped a patch on them, and then repeated the thermography. Of course, such shoddy methodology guarantees a positive result, at least with patients whose patch is applied to an area covered by clothing. The temperature of that skin can start out warmer and then cool over time after the clothing is taken off, regardless of whether a patch is applied or not. Did Broglin do any no-patch control runs, to make sure to correct for this phenomenon? Color me a crotchety old skeptic, but my guess is: Almost assuredly not. No, scratch that. There’s no way on earth it even occurred to these quacks to run such a basic control. They can, of course, prove me wrong by sending me their detailed experimental protocol to read.
I suspect I will wait a long time.
After nearly 14 years of regular blogging and 20 years of examining questionable claims, it never ceases to amaze me that products like Luminas patches are still sold. Basically, it’s a variety of quantum quackery in which “energy” is basically magic that can do anything, and quantum is an invocation of the high priests of quackery.
By Orac
To contact Orac: [email protected]
99 replies on “Luminas Pain Relief Patches: Where the words “quantum” and “energy” really mean “magic””
By my reckoning, 10.4 x 10^19 electrons is 16.7 coulombs. To store 16.7 coulombs on 333 picofarads you need to charge it to 50 billion volts. Leyden would be impressed. Now I’m not quite clear on the claim, since the description says electrons per patch and the capacitance is per square inch.
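(A minimal Python sketch of that arithmetic, assuming, as the comment does, that the FAQ's figures are taken at face value:)

# Charge carried by the claimed electrons, and the voltage needed
# to store that charge on the claimed capacitance (V = Q/C).
E_CHARGE = 1.602e-19       # elementary charge, coulombs
electrons = 10.4e19        # "host electrons", per the Luminas FAQ
capacitance = 333e-12      # farads; the FAQ quotes this per square inch

charge = electrons * E_CHARGE
voltage = charge / capacitance
print(f"charge  = {charge:.1f} C")     # ~16.7 C
print(f"voltage = {voltage:.2e} V")    # ~5.0e10 V, i.e. 50 billion volts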
But they are special electrons, like the marshmallow bits in Lucky Charms, so they probably don’t abide by the usual rules.
With all that charge, opening the package ought to result in the patches flying out like one of those spring snake gags.
You can figure out the area of the patches from their measurements. The large patches are 2.75″ x 4.0″ and the medium patches are 1.5″ x 2.75″. Just sayin’.
But the description says so many molecular structures per patch, not per unit area, then in practically the same breath talks about capacitance per square inch, hence my confusion: “Each patch contains 5.2 x 10^19 molecular structures.” I might be induced to think the numbers are total fabrications. But surely not!
Anyway, it makes little difference if the voltage is 50 billion or 50 million.
What cracks me up is that someone knew enough physics to come up with those numbers but not enough to know why they are ridiculous.
If they put Elvis’s mojo, or even Mojo’s mojo, into an energy patch, I might buy it.
“If you don’t have Mojo Nixon, then your patch could use some fixin’!”
I don’t know where Skid Roper has gone, but Mojo seems to have hooked up with Jello Biafra on at least one occasion. The “Love Me, I’m a Liberal” is somewhat amusing.
“Capacitance.” Get it together, Random Capitalization People.
But SERIOUSLY, it has the energetic signatures of all of those herbs, spices and nutraceuticals!
-btw- more legit (semi-legit?) pain patches and liquids contain cayenne, menthol or lidocaine:
early on, in my continuing leg injury adventure, I used a liquid form of lidocaine which seemed to be helping HOWEVER at one point, it felt as though my leg were on fire and washing it off didn’t help.
Eventually, it wore off and I felt better but swore off Demon Lidocaine.
Fortunately, I am better enough that I don’t try these products but I can see how people rely upon them when they have pain.
Perhaps this is the doctrine of signatures updated for modern times.
I don’t know how you’d get lidocaine to penetrate intact skin. Perhaps it will if it is dissolved in dimethyl sulfoxide (DMSO). I think iontophoresis will work with lido.
@ doug:
I looked at the meds: they are OTC – standard drugstore stuff and one is 4% lidocaine/ another product has that plus 1% menthol. It did help against muscle/ nerve pain HOWEVER I had a bad reaction so I don’t use it.
Cripes, the first time I had my submandibular cyst biopsied (they eventually resorted to the aspiration gun), all I got was the cold spray, which is absolute crap as an analgesic.
Yes, it has the “signature” of a bunch of placebos, which makes it…
…a more convenient placebo!
The only thing that would be even more convenient would be placebos you can download from an app. Oh wait a minute. Haven’t we read, in this very column, about some “energy medicine” quack offering their own extra-special photons in an app?
This one sells electrons, that one sells photons, the only thing that hasn’t been tried yet is to sell neutrinos. Someone needs to put up a “surprise!” website offering “health-enhancing neutrinos,” and while they’re at it, “selected quarks and gluons.”
“We take out the strange quarks and leave only the charmed quarks, so you can have a charmed life!”
Hmm, if only my close friend & coworker who does websites had this type of sense of humor, I’d love to try it.
When you click the “Buy” button, you get a message about quantum quackery and a caution to not waste money on dreck.
BTW, if we proliferated those kinds of “surprise!” websites, they’d screw up the signal-to-noise ratio for the quantum quacks and other quacks, so badly that the quacks might suffer a loss of business, purely by way of losing placement in search engines. Anyone here up for a bit of guerrilla media?
As a chemist, I am amused by people who think that 10^19 is a large number. Or that it’s in any way impressive or unusual.
As for the oxygen polar bonding areas capable of holding a targeted host electron, I should put that on an exam to see if anyone can figure out that it just seems to be a florid description of an anion. You could say that lye (sodium hydroxide, aka Drano) contains the same: wouldn’t that make a great skin patch! :)
I think that the FDA should require that, if anyone wants to use the word “quantum”, or even “energy”, about a product, they should first be able to define it. That would do the trick. Even Deepak himself couldn’t pass that test.
Chopra especially couldn’t pass the test. He has had physicists try to explain it to him while he sits there with a blank look on his face so he knows he doesn’t understand it. That’s why he prefers quantum woo – you don’t have to understand anything and can just make $h!+ up as you go along confident that no acolyte or fellow woomeister will pull you up on it even though their own version is contradictory.
Chopra can actually be pretty good on comparative religion, so it’s doubly tragic that he goes down the quantum BS road. If he stuck to religion & philosophy, and stayed the hell away from the science he knows not, he could do some good.
Part of the blame for this rests with the media for giving his nonsense attention. Same as with Nobel laureates who’ve gone down various BS roads, such as Shockley and quack racial theories, etc. Same as with Silicon Valley big-wigs, look up “transhumanism” and “Alcor” and so on.
If we tried to educate reporters, it would be a constant game of whack-a-mole, and there would always be those who resist all efforts so they can keep pursuing cheap clickbait. But perhaps we can reach senior editors and publishers, at least in the major media such as newspapers of record, radio/TV networks, and so on?
Scientists could offer their grad students incentives to do the outreach. Postal mail to publishers, that leads off with “I’m writing on behalf of Dr. So-and-So (well known scientist) at Such-and Such University (major university)…” could work, because it’s leveraging name recognition, and postal mail gets through where email doesn’t. These letters and the replies could also be published to scientists’ blogs.
Thoughts? Ideas?
Yo Garnetstar, I’ll take your “energy challenge.”
Canonically, energy is the capacity to do work.
Work is somewhat circularly defined as conversion of energy from one form to another.
Mundane examples: a generator converts kinetic energy to electrical energy; and a motor converts electrical energy to kinetic energy. The same device can be used both ways, thus we get regenerative braking in electric and hybrid automobiles.
OK, so (excess capitalization intended for effect):
“Energy is the capacity to do work. The special Energy embodied in our products does its work by multiplying the Subtle Forces of your Bio-Energetic Field…”
Uh-ohski, looks like we’ll have to require them to define “force” (e.g. a measurable influence on the motion or other behavior of an object), and “field” (an area of spacetime in which a given force has a measurable effect e.g. a gravitational field around a star).
This could actually get fun.
Canonically, energy is the capacity to do work.
Which is why it’s a poor definition. There was a good post on this at the old SB, maybe Chad Orzel. Definitely not Ethan. I can’t remember whether there was another physics Scibling.
Fields exist throughout the entire universe and there are particle fields as well as force fields and, of course, the Higgs field.
I’d love to know the physiologic mechanism by which the body can distinguish one electron from another, given that there really is no known biological mechanism for that to happen
It’s even better than that: electrons are particles that by definition cannot be distinguished from one another. Each and every electron is fully identical to any other electron in a very fundamental way. All electrons have the exact same mass, charge and spin, and quantum physics also dictates that it is not possible to track the trajectories of individual electrons.
Absolutely, because it would completely overturn the amassed knowledge in the field of quantum physics from the past hundred years.
It’s even better than that: electrons are particles
The deuce you say. (Yes, I understand why one can’t walk through walls absent making a big mess).
So much so that John Wheeler wrote to Albert Einstein saying that he had figured out why all those electrons are identical. By using Richard Feynman’s idea that positrons are sort of like electrons traveling backwards in time, he concluded that there is only one electron in the universe (you can test this for yourself using Feynman diagrams, and it does indeed make sense). Of course this was only an interesting idea, and no one really believes it is true.
correction…John Wheeler communicated this to Richard Feynman, not Albert Einstein.
We need a Wooday moment of silence in honor of Queen Elizabeth’s personal physician, “an international leader in homeopathic and holistic medicine”, who was killed on Wednesday when his bicycle was hit by a truck. On National Cycle to Work Day.
No word on whether he was treated in the Homeopathic ER.
On National Cycle to Work Day.
Back when I was working for a university press, a fine young, long-waisted lady, year after year, would implore me to ride a bike to work. And I always pointed out that my apartment was a five-minute walk from work. She wanted me to rent one anyway. I’m somewhat hostile to the attempts by cyclists to try to Borg pedestrians, especially given that they represent a greater hazard than do cars.
I agree. Cycling is for leisure. In my neck of the woods, there are numerous trails for walking, cycling, and horse riding which were originally rail lines between the city and the outlying farmlands.
I used to cycle to school in college. It was about a 15 minute ride, and parking was definitely easier. I would do it again if I lived close enough to work, or if I worked on a campus large enough to make biking an easier way to get around. It’s good exercise. But that should be a choice, never a demand from others.
I do everything by bike. But hey, I’m Dutch. I even take the bike for distances that are a 5 minute walk.
Don’t like long distance biking and hate other cyclists who think traffic regulations are not meant for them.
On the other hand, I’d rather get run over by a bike than by a car.
I sometimes hate pedestrians as well, especially when they walk on the cycleway instead of on the footpath next to it and let their small dog run free, while the cycleway is slippery because of snow and ice.
Gotta be honest here. When my wife and I were in Amsterdam, we both thought that the bicyclists were some of the biggest jerks we’d ever seen. Try as we might to stay on the footpath and obey the traffic signs and lights, we still had multiple near misses in just four days in the city.
I used to cycle to school as well and survived two near death experiences. I would definitely not use it for commuting these days. Motorists hate us, even those of us who are polite. But I love long distance cycling. In fact I’m in training for my 7th consecutive 210km Around The Bay cycling event held in Melbourne each year in early October. And I do all my training on the rail trail that extends 44km into the countryside from where I live. I pity those who have to train in the city.
I presume that those in Chicago who bicycle on the sidewalk (which is prohibited if one is over 14 years of age) and wear helmets are doing the latter in case they get clocked. The next time I hear “on your right/left,” I’m moving in that direction.
Problem in the UK too. Some [email protected] in a track suit talking on a mobile phone while riding on the pavement. Makes me want to kick his wheels in. Not only is it illegal (But not really enforced) but bloody dangerous too. I always ride on the road. Unless there is a specific cycle path. Don’t get me started on running red lights. Grrrrgh.
Same. But mostly out of self interest. I never ride on footpaths – because cycling on the road is faster. And I always obey traffic lights – because motorists don’t see cyclists and their cars hurt when they hit you. I also wear a helmet and not only because it is legally required. I have been hit on the head several times and was grateful my head was covered by a helmet.
But some pedestrians are a bit of a worry as well, especially on shared trails where I cycle. Dogs are rarely under the control of their owners. Either the leash is way too long or the dogs are actually disconnected from their leashes. Having walked my dog in the past before arthritis put an end to that (the dog, not me. Yet!), I sympathise. My solution is to slow down to a speed at which I am able to come to a complete stop before hitting the dog as it inevitably walks directly into my line of travel. I also make a point of exchanging some pleasantry with its owner, hoping, I think in vain, that they will take better care of their mutt next time.
When I pass a pedestrian from behind, I have learnt that the only unambiguous call is “rider” – in a strong voice and at the right distance. You approach them from the centre of the trail and sway to whichever side they don’t move to, because their choice is totally unpredictable, even if they are walking well to one side of the trail. When I approach from in front, I keep to my left (I live in Australia where we drive on the left) and hope that the pedestrian will sensibly move to their left as well. This is usually the case but also not guaranteed. The occasional pedestrian already walking on the left side of the trail will inexplicably move to the right side of the trail despite putting themselves into my direct line of travel.
in a strong voice and at the right distance
Just bear in mind that not everyone can hear. It’s too long a story for me to recount in my current state of exhaustion, but quite a while ago, I basically wound up with a partial lateral meniscectomy as a result of impacted cerumen. And random street violence coming from behind. It was about a year before I stopped cocking my fist if anybody approached me too quickly from behind.
@ Orac,
Amsterdam and cyclists, that’s some combination. I think a Dutch lawyer started a case against the city council, because they should do more against cyclists who don’t follow the law. Alas, he lost his case. (Actually, finding a cyclist who follows the rules is something like finding a needle in a haystack.)
I still remember seeing a friend of my mother, a very civilised lady, cycling against the traffic, something that annoys the hell out of me and makes me want to scream.
I can’t say I never cycle on a footpath, but only if there are no pedestrians, or just one or so, and I limit my speed to walking speed.
Yes, I am well aware that not everyone has good hearing. I see many octogenarians walking on these trails. So I should add that I never hold it against pedestrians when they seem to do silly things.
Just yesterday there was a schoolgirl about fifteen years of age using part of the trail to walk home from school. She was walking on the left of the centre of the trail and moved over to the right side when I warned her by calling out “rider” (from past experience, many pedestrians don’t hear you coming and are startled when you pass them, so often it is for their benefit) and then followed up with “passing on your right”.
I always show pedestrians the utmost respect because they are doing exactly what I do – enjoying exercising on a nature trail – just using a different method. This is also partly so that we remain on friendly terms, because you often meet the same pedestrians repeatedly. I never reprimand dog owners for the actions of their dogs, even when it is because they are not controlling their dogs. I understand that they prefer not to have their dog on a leash, however impracticable.
OK, fine. Now, how long does it take for all those fancy electrons to be released? A few picoseconds, I’d guess, if the capacitance is of that order of magnitude. Not much point using a patch, then. Better just to rub an amber rod with a black cat fed with turmeric at midnight and apply it to the base of the victim’s skull, but there’s probably not so much money in that.
True, I suppose. There’s always someone willing to pay for witchcraft. After that it’s all down to the marketing.
Actually, Rich, while I do have several pieces of Baltic amber and some curry powder, I think that getting the semi-feral black cats to stay still long enough will be somewhat of a bitch.
I am reminded, somehow, of Doctor Science’s statement that you can generate animal magnetism by rubbing Amber with a cloth. But it all depends on Amber’s mood.
Handcuff, rope and the whip. Hand that to Amber and she’ll take care of your pain 😀
No need for pain patches…
Al who’s finding it hard to type on a keyboard while being handcuffed and roped out
I was thinking, more efficient to sell 330 picofarad capacitors, with instructions to tape one lead to your arm and leave the other lead pointing into the air to “receive Healing Energy from the Life-Force Field.”
The leads would have to be curled up into little spiral curleycues, so the pointy ends weren’t sticking out, otherwise potentially serious injuries could occur.
Our competitors’ quantum healing patches quickly wear out. And they have to be applied as soon as you remove them from the packaging, otherwise the electrons wear off, much as overcooked vegetables lose their nutrition. But our Life Force Capacitors never wear out: they keep delivering Energy from the Life-Force Field, for as long as you wear them! You won’t ever have to buy another one, unless you lose yours or want to give them away as gifts.
Imagine going to a healing-woo convention and seeing people running around with capacitors taped to their arms, with curlycue leads sticking up.
That would be worth all the effort.
Damn, I really want to try this, just for the sake of seeing the pictures of wooskis with capacitors taped to their arms.
When the game gets old, shut down the website, post an official-looking “FBI notice” on the home page, and start spreading conspiracy theories about “government suppression of alternative medicine.” Then track the conspiracy theories to scope out how they propagate. A couple of years later, publish a story about the whole thing.
The leads would have to be curled up into little spiral curleycues
Please, no string theory.
The good thing about this ‘therapy’ is that you can have acupuncture for free if the cat is not amused by these shenanigans
OMG, I was laughing hard! Thanks…
Does anybody else see the direct self-contradiction? In quantum mechanics, multiple electrons must enter into a wave function as an antisymmetric superposition because they are fermions, which gives what is called exchange symmetry. The Pauli exclusion principle is a direct consequence of this. What I mean is that quantum mechanics says that electrons are so indistinguishable from one another that the wave function containing them is a sum of all situations where they have all individually traded positions in the configuration –at risk of repeating myself, literally because you can’t tell them apart.
Really kind of amazing: invoke quantum mechanics in the first sentence and then immediately posit a situation in the exact next sentence that quantum mechanics, by its very nature, says can’t happen.
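For reference, the textbook two-electron statement of that exchange antisymmetry, written out in LaTeX (standard quantum mechanics, nothing specific to this product):

\Psi(\mathbf{r}_1,\mathbf{r}_2)
  = \frac{1}{\sqrt{2}}\bigl[\phi_a(\mathbf{r}_1)\,\phi_b(\mathbf{r}_2)
  - \phi_b(\mathbf{r}_1)\,\phi_a(\mathbf{r}_2)\bigr],
\qquad
\Psi(\mathbf{r}_2,\mathbf{r}_1) = -\Psi(\mathbf{r}_1,\mathbf{r}_2)

Swapping the two position labels only flips the overall sign, so no observable can depend on “which” electron is which, and setting \phi_a = \phi_b makes \Psi vanish, which is exactly the Pauli exclusion principle.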
Perhaps each electron is quantum-entangled with another electron in a much more advanced civilization’s hospital light years away, where alien physicians do the choosing for us?
Apart from what others above have noted about this statement: The map is not the territory. We can compute the energetic signatures of the relevant molecules. We can store the results for hundreds of such molecules on media the size of those patches. That does not mean we have actually exposed the patient’s body to any of those substances–it’s more like taping a solid-state memory device (such as a thumb drive) to the body. And I suspect it is just as therapeutically effective.
In addition, I have an instinctive distrust of quantitative statements based on color scales where they do not show me values. I refer to the diagram in which they claim to show a reduction of inflammation in a matter of minutes. How do I know that they haven’t fiddled with the range of the color scale between the “before” and “after” pictures? How do I know it is not the result of somebody walking off the street (or removing his shirt) and then sitting in an air-conditioned room for a few minutes? The one thing I do have to work with here is the relative levels, and I see that in general the parts of the body that have relatively high values of what they are measuring in the before picture have relatively high values of that quantity in the after picture. If these patches actually did anything, I would expect the parts of the body that are marked as having had patches put on them would see a greater reduction, and I am not seeing that in the diagram.
This is really just an expendable-buy-some-more variant of the magic-infused silicone wrist bands that first appeared several years ago. The magic in the wrist bands was better because it could make its way to the target site all on its own.
Someone made a ton of money off those silly bracelets; they re-named the basketball arena in Sacramento for them. I haven’t seen any ads for the bracelets recently, are they out of style or am I not watching the right ads?
Orac, I was wondering if you could comment on a new book by a Dr. Tom Cowan (he has been favorably reviewed by “Dr.” Mercola, if that helps 😉 ). He has polluted our local public radio current affairs program a couple of times and I would like to find a SBM review of his work to forward to the local news team. Here is a link to his interview (I don’t know if they booked the composting lady as an ironic comment).
I’m guessing that’s not the same Tom Cowan who used to play in defence for Huddersfield Town…
Whenever Orac highlights one of these scams, I always wonder how successful they are, how many people actually buy these things. All we can know here is that somebody with a sizable chunk of cash to invest thought this would be a winning item and funded the rather splashy (and definitely professionally produced, i.e. not cheap) promotional video/website/etc. I’m pretty sure some of the past Friday Woo-ish howlers – e.g. Bill Gray’s coherence apps, QUANTUMMan – are no more than the failed pipe dreams of would-be alt-med entrepreneurial titans. The websites are ghosts that never get updated, or the LLCs are shuttered, other business records show no activity, or the proprietors are still busy working their day jobs, or something… But maybe the fact these things keep appearing is evidence that some of them have worked, well enough to encourage other overly-ambitious woo impresarios to try?
In addition to the expense displayed in setting up Luminas, I also interpret this as a straight-up scam, not the work of ‘true believers’. It’s not just the totally nonsense invocation of quantum physics. The clinching howler for me is the list of EVERY popular supplement “and over 200 more!” Gee, that’s a lot of energy to stick in one little patch. They must be using the quantum magic to keep all the different vibrations from all those different substances from interfering with one another or combining in a way that really f***s you up. And, yeah, I’m sure there’s an exhaustive complicated manufacturing process involved in charging the patches with electrons from each of those substances – which conveniently leaves “no active ingredients” in the product.
I hope Orac follows up on Luminas at some point in the future where it might be evident whether or not they’ve found a viable market and are making any money. If they do well that would be another drop of depressing news in the giant ocean of gullibility, magical thinking, conspiracy theory, denialism that seem to be pandemic here, at what is most likely the sunset of homo sapiens sapiens…
What with all this “quantum” stuff, I am sure they have someone on board to make sure they get it right. Probably someone even more qualified than Sean Carroll and Lawrence Krauss combined.
Probably someone even more qualified than Sean Carroll and Lawrence Krauss combined
Please don’t give them the “multiverse.”
Defibrillator pads…now those have some real electron charge. Why doesn’t Luminas sell those?
As I understand it, the typical defibrillator tops out at something around 400 joules. I re-ran the pad numbers assuming this time that the total charge was distributed over the area of the big pad, so the voltage would be reduced to only 4.56 gigavolts. For the total pad capacitance of 3663 picofarads, that works out to 3.8 x 10^10 joules – about a hundred million full charges for a defibrillator, or about 10500 kilowatt hours. Seems to me like that would really put the flame in inflammation.
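(A minimal Python sketch of that comparison, again taking the quoted figures at face value; the 3663 pF is the FAQ's 333 pF per square inch times the 11 square inches of the large patch:)

# Energy stored on the whole large patch at the claimed charge,
# compared with a full-energy defibrillator shock (E = Q^2 / 2C).
charge = 16.7               # coulombs, from the earlier comment
capacitance = 3663e-12      # farads, whole large patch
defib_shock = 400.0         # joules, roughly a maximal defibrillator discharge

voltage = charge / capacitance               # ~4.56e9 V
energy = charge**2 / (2 * capacitance)       # ~3.8e10 J
print(f"{voltage:.2e} V, {energy:.2e} J")
print(f"= {energy / defib_shock:.1e} defibrillator shocks")   # ~1e8
print(f"= {energy / 3.6e6:.0f} kilowatt hours")               # ~10500 kWh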
Perhaps agencies in charge of airport security should be alerted.
OT but Mike Adams will soon be ranting…
(NYT) It seems that Alex Jones may have deleted evidence, which he was ordered to save, of the Sandy Hook conspiracy mongering he broadcast: the parents of the murdered children are trying to sue his pants off**, threatening his lucrative supplement and survivalist business.
He’s been taken off Facebook, YouTube and pirate radio.
But he’s available on Mike’s new
** which may be just but profoundly unattractive.
We’ve known about Acetyl-L Carnitine for centuries? Who knew!
But the reference to K2 is really what caught my attention. I wonder what the DEA will think of that after what happened in New Haven the other day.
Ah, so that’s what K2 is! I’d just assumed they were referring to the mountain (it seemed as reasonable as everything else). The Luminas site has a frequently asked questions section:
A. Do not cut the patch. If you cut the patch, the charge will be lost and the patch will no longer be effective.
@ Narad or Denice Walter,
Is there a simple and cost-effective way to determine the validity of such a statement without purchasing the Luminas patch?
Please advise.
You brought it up, Michael. Either you come up with a clever work around, or you buy one, cut it, and see for yourself.
The rest of us have better things to do.
@ Panacea,
Thanks for doing better things! The patch must have been cut at one point in the manufacturing process. It must be one hell of a trade secret wherein a second cut completely destroys its efficacy. I know of only one other product that fails completely after being cut, and that’s a water balloon.
I’m sure they’ve worked out that they need to cut before charging. But then again…
For anyone with even a basic layman’s knowledge of Quantum Physics, the nonsense in this claim is obvious. However, there are much less obvious forms of quantum quackery that can, and do, fool people even with advanced knowledge of QM.
The following video was made by Quantum Gravity Research. This organisation employs physics PhDs to do research into QM so it uses real physics. The problem is that its founder clings to the old “consciousness causes collapse” version of the Copenhagen Interpretation.
The vast majority of present day physicists who do favour the Copenhagen interpretation have long ago jettisoned the “consciousness causing collapse” nonsense because the evidence against it is so overwhelming.
However some have a vested interest in this idea and, as the video reveals, they cherrypick the science that seems to support their interpretation and ignore the multitude of disconfirming evidence. And they actually lie about there being no deterministic interpretation of QM.
Despite the backing from many physics PhDs doing this research, the idea is BS. The organisation was set up by Klee Irwin, who made a fortune selling fake medical remedies. But it takes more than a smattering of knowledge of QM to see through it all.
If nothing else, it is a testament to the adage “sex sells” – watch it to see what I mean 😉
Beg pardon? I have little interest in “interpretations,” but that’s likely because I’m weary of MWI babbling around the Intertubes. It’s not “shut up and calculate,” but yes, the measurement problem is a real thing even if the Schrödinger equation is nominally deterministic.
If you can take it, check this out.
That link is to the ideas behind the group that calls itself “Quantum Gravity Research”. Forget about anything written by this research group. It is not peer reviewed. It is based on the underlying idea that consciousness collapses the wave function, a totally discredited concept that used to be promoted as an outworking of the Copenhagen interpretation but has long since been excised from that interpretation by the vast majority of today’s physicists on the basis of experiments in QM. Although the research is conducted by PhD graduates, the organisation is actually funded by an individual who made his fortune selling medical scams to an unwary, credulous public and who effectively believes The Matrix was a film about science.
Nice piece here, though (h/t Peter Woit). My few remaining neurons are never going to get it together to grasp geometric Langlands or representation theory, unless I start with Charles Pierce. I like his brother better in any event.
Unfortunately this is way beyond my pay grade. I don’t have any formal training or qualifications in QM, just enough to have a reasonably well developed layman’s BS meter for quantum woo (I hope).
I don’t understand your lack of interest in interpretations of QM.
The trick is to separate the physics from the interpretation. The fun is to see how some people who are committed to one interpretation or another denigrate other interpretations but seem blind to the problems with the interpretations they favour. For example, some people who favour the Copenhagen interpretation and criticise the MWI seem to be unaware that the Copenhagen interpretation is similarly burdened with its “multiple paths”, which amount to infinitely many paths, all of which must be traversed. And MWI is more parsimonious, with one less assumption. Not that I support MWI, only that I find the discussion interesting.
And the pilot wave interpretation is attractive because it is both mundanely physical and deterministic but, on the other hand, it requires the existence of global hidden variables, which is problematic from the point of view of the very real evidence for non-locality, and it is at least incomplete because it can’t account for special and general relativity. But who knows what future discoveries may yield.
I will read your link though.
I don’t understand your lack of interest in interpretations of QM.
Decoherence is decoherence. The interesting question from my viewpoint is whether GR needs to be quantized or QFT needs to be superseded to get further. This requires experiment primarily, which, absent serendipity, requires connection to theory (or the other way around). I have no objection to the philosophy of physics, but at some point it’s just navel-gazing or worse.
^^ Let me try it this way: Are the “many worlds” a well-ordered (transfinite) set? If yes, which SR would seem to require, what’s labeling them? If no, then you’re just back where you started, whether the question is nonlocality or the simple emergence of classical behavior of quantum systems.
Dear Luminas,
In addition to ferking up all that quantum-y stuff, it would be Berberis vulgaris, not Vulgaris.
Carl Linnaeus
The parents donated $50,000 to the children’s hospital where she was treated, so I think they know who did all the work.
Follow-up: Apparently the child was first diagnosed with a primary brain tumour which is almost uniformly fatal, and she was given weeks to months to live. However, a follow-up scan showed cavitation or cyst formation, which created doubt about the diagnosis, and therefore the tumour was biopsied. This changed the diagnosis to Juvenile Xanthogranuloma, which is essentially an abnormal proliferation of a type of blood cell called a histiocyte. So this was actually not a primary brain tumour. This changed the prognosis because these tumours can be treated and cured with chemotherapy. Her treating specialists were obviously still concerned because of the site of the tumour and were therefore guarded about her prognosis. However, this is no miracle as it is being portrayed in the media. She was cured by chemotherapy administered by paediatric medical specialists.
Hmmm … an oxygen with two “electron binding spots” [???] …maybe it’s … WATER!!
Luminas Pain Relief Patches: Here’s the excuse to show partially naked bodies.
I don’t see the picture but at least I got the description 😀
Yeah I tried for that as well, but the link disappointingly led nowhere. 😉
I’ve had Lyme disease for the past 4 years and just started treatment for that 2 months ago. To my surprise the patches do take away the radiating stabbing pain I get when my body is at rest.
I have to wear a lot though. 8 patches on my back and 3 per hand. It says they work for up to 3 days but I wear all of them for about 5 days and have success.
I put on 5 patches before bed one time and I was wired beyond belief and couldn’t fall asleep for the life of me.
The patches are really affordable the way I use them and are providing a lot of relief while this long 9 month treatment plan unfolds
Isn’t it pretty obvious that Dr. Craig Davies is a Doctor of Pro Sports? There are very few university medical schools (maybe Palmer College?) offering that degree, so I expect his services to be rather expensive.
|
cce94f42cbc9714a | Carl Calleman’s Mayan Cycles Input (archive) to STAGE – CYCLES
Due to the major shift in mass consciousness, which necessitates finding the 777,000 ASAP, our focus has changed. Consequently, this stage needs to realign with the dynamic nature of spiritual operation. Since learning of the Nine Waves of Creation from an I.D.E.A. Director Dr. Carl Johan Calleman, it became obvious that his understanding of cycles held great value to our work. We are happy to announce the posting of his articles in Stage Cycles, which includes helpful information on the 36-day cycle in the Days and Nights of the Ninth Wave.
The Ancient Wisdom For Now Scroll page was revised. Full Circle: The Mysteries Uncloaked Globe D, & Full Circle: The Mysteries Uncloaked are now incorporated into Suzzan's new 2-volume treatise: AMERICA’S HIJACKED DESTINY...Vol.1 & Vol.2. Posting Carl’s insights helps us navigate the Ninth Wave and gives new insight to STAGE CYCLES.
11 June 2022
In his new book, THE LIVING UNIVERSE: The New Theory of Origins, Explaining Consciousness, the Big Bang, Fine-tuning, “Dark Matter”, the Evolution of Life and Human History, our friend and one of our I.D.E.A. Foundation’s directors, Dr. Carl Calleman expands Quantum Theory to the Macrocosm. This inspired book presents a coherent theory of Life’s purpose, demonstrating that rather than our existence being the product of a random happy accident, our Evolution was orchestrated and driven by a fully conscious structure.
For thousands of years people believed the Earth was at the center of everything, proven by the many drawings depicting such belief. However, since Galileo discovered the Earth was among nine planets orbiting our sun, astronomers continued to “correct” that belief with high-powered telescopes, which eventually placed our planet and solar system on an outer arm of the Milky Way Galaxy. From that moment on, our existence became about living in a disinterested expanding universe, while facilitating Life’s arbitrary evolution.
However, with the development of quantum theory we learned that at the subatomic level, matter is anything but passive, because the components of atoms indicate consciousness. Then in the early 1980s Arthur C. Clarke introduced us to the wondrous world of the Mandelbrot Set and fractal geometry, in essence providing a visual expression of the “kingdom of God” within. At the time, most thought that science and spirituality were finally moving closer together, but alas consciousness remained confined to the microscopic world for the next four decades. Carl’s theory takes quantum theory to the next level, endowing the universe with consciousness. Moreover, this consciousness has been driving the evolution of Life, with Earth as the end point, or center.
Interestingly, Carl also provides a new way to view astrology when he says that stars and galaxies are conscious. Emphasizing this, he presents two identical-looking photographs, comparing “the structure of brain tissue” and the filamentous, or threadlike, “organization of galaxies.” From this comparison, Carl concludes that there are roughly the same number of galaxies as brain cells in what he calls the “observable universe.” To my mind, this means that these systems are not merely a collection of solar systems, stars, and planets that affect us by their angles and positions. The stars and galaxies affect us consciously.
Full disclosure: as with needing Craig to describe our work electronically, scientific and mathematical equations also tend to go over my head. However, since having the privilege of helping Carl edit his book The Global Mind, I am more acquainted with his precepts. Still, as with all his books, while systematically providing scientific data and equations for the academic community, he also accommodates us scientifically and mathematically challenged readers by explaining his findings in layman’s terms. Consequently, the non-academic reader is able to grasp the book’s concept fairly easily. That concept is so sublime it makes one wonder why it took so long.
For instance, he relates that established cosmologist Dr. Lawrence Krauss of Arizona State University thought it “crazy” that there was a relationship between the CMB (Cosmic Microwave Background) radiation, originating 13.8 billion years ago, and the current position and alignments of our solar system, formed approximately 5 billion years ago. Puzzling over this, Professor Krauss asked:
“Is this Copernicus coming back to haunt us? That's crazy. We're looking out at the whole universe. There's no way there should be a correlation of structure with our motion of the earth around the sun - the plane of the earth around the sun - the ecliptic. That would say we are truly the center of the universe.”
Carl answers: “It would be ‘crazy’ because it is inconsistent with the view of official science that the Big Bang was a random and completely unguided event. And indeed, these alignments would put not only the Cosmological Principle and the spacetime of the General Theory of Relativity in question, but also the whole idea that matter was randomly distributed in the Big Bang.”
We cannot recommend The Living Universe too highly, as it explains our origins, not only giving purpose to our lives, but also providing solutions to the world’s problems.
Why might Putin’s Russia want to invade Ukraine? February 13, 2022
Posted February 13, 2022 | By: Calleman | Article by Carl Johan Calleman, Ph.D.
Making predictions based on the Mayan calendar always holds some uncertainty, since after all the evolution of the world in this perspective is not deterministic, but based on the potentialities of different states of consciousness. Nonetheless, this calendar system does exhibit a large number of patterns when it comes to how the course of history has played out, and sometimes those have proved to be useful for making predictions. For instance, the geopolitics that at the current time dominates the attention of the world is a potential invasion of Ukraine by Russia, with the purpose of either annexing it or making it into a vassal state. In addition to seeking to expand Russia to include a large territory that it has more or less controlled in the past, it may aim for such a subordination also because a relatively democratic Ukraine presents a threat to the authoritarian rule of Russia. Presumably to uphold this ambition, Russia has in recent years significantly modernized its military.
Moreover, in addition to the pandemic by which it has been especially hit, the current weakness of the United States, based on its recent pulling out of Central Asia as well as domestic division and weariness of wars, makes this a good time for Russia to act on its ambition to claim a role as a superpower in the world. Adding to this risk for war, there does not seem to be much room for negotiations at this point. The current government of Ukraine and the majority of its population want it to be a sovereign nation with the full right to sign treaties with other nations, whereas Russia considers it as a part of its security zone. While an invasion now seems likely, the same result of a subordination could however also be accomplished by a long-term isolation of Ukraine by military means. At least for now, time is on the side of Putin. The purpose of this article is however not to report on such details that the press is already discussing. It is instead to discuss the deeper reasons existing in the underlying quantum field that may lead Russia to de facto incorporate Ukraine, whether through an invasion or not.
In fact, the historical pattern of the Sixth Wave (the so-called Long Count) of the Mayan calendar system clearly tells us that the energies this wave produces make a Russian incorporation of Ukraine in some form highly likely. To demonstrate this, Fig 1 shows maps of some of the historical movements in relation to the planetary midline at the beginning of the past 7 baktuns (a baktun is a Mayan time period of 394.7 years; the planetary midline, which goes through Malmö, Berlin and Rome down to Cape Town, is the 12th longitude East, which separates the Eastern and Western Hemispheres of our planet and is established on the level of the inner core of the Earth). Recently, in 2011, the fourteenth baktun in the Long Count began, but no map has as yet been added for this. The pictures show major military movements during the katun (19.7-year period) following a baktun shift in the Long Count.
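For readers who want to check the arithmetic behind these shift points, below is a minimal sketch (my own illustration, not Calleman’s; it assumes only the two figures quoted in these articles, a 394.7-year baktun and a 14th-baktun start in 2011) that backdates the baktun shifts whose aftermath the maps in Fig 1 depict.

```python
# Back-date the Long Count baktun shifts from the 2011 shift.
# Assumed inputs, both quoted in the article: one baktun = 394.7 years,
# and the 14th baktun began in 2011.
BAKTUN_YEARS = 394.7
LATEST_SHIFT = 2011  # start of the 14th baktun

def fmt(year: float) -> str:
    """Render a signed year as AD/BC."""
    y = round(year)
    return f"AD {y}" if y > 0 else f"{-y} BC"

for n in range(14, 5, -1):  # baktuns 14 down to 6
    shift = LATEST_SHIFT - (14 - n) * BAKTUN_YEARS
    print(f"baktun {n:2d} begins {fmt(shift)}")
```

Within a year or two of rounding, this reproduces dates used elsewhere in these articles: the era of light beginning around 1617, the dark age of roughly AD 434-829, and the Greek Dark Ages beginning around 1149 BC.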
The study shows a clear pattern: at the beginning of baktuns that are days (shown in the left column of Fig 1), expansions have taken place from the planetary midline, whereas in time periods that are nights (right column of Fig 1), Europe is attacked from the East. The pattern includes some of the most important movements in European and world history, such as the rise and fall of the Roman empire, the rise and fall of the British empire, as well as the beginning of the dark ages and the Mongol Storm, which created the largest empire in human history.
Fig 1. Major movements from and towards the planetary midline at the beginning of the later baktuns of the Sixth Wave.
It should be noted that this pattern of military movements from and towards the midline only becomes apparent in light of the shift points in time between the different baktuns in the Long Count. Calendars other than the Mayan Sixth Wave would thus not reveal a pattern of alternating energies generating movements in different directions of this kind. It does provide an explanation of the fact that not only the Maya, but other ancient peoples as well, recognized different powers in the four geographical directions, and that these do play a role in the course of human history. The line through the twelfth longitude East thus serves as a kind of wave generator for the evolutionary process of the Sixth Wave, which ultimately has produced the different mentalities of the East and the West. That Europe has had a disproportionately important role in world history is explained by this energy line in its midst.
The very fact that this pattern of alternating movements exists also shows that history indeed is driven by an underlying wave movement (called the Plumed Serpent by the ancient peoples of Mexico, and cosmic serpents or dragons in many other ancient cultures). The results of this wave movement on the emergence of major empires in the Mediterranean/European context are shown in Fig 2, which highlights the dominating empires in periods that are days (peaks in the wave). While much of world history can be explained by this wave movement, events outside of the European context will be left out here because they are not relevant to the current discussion of Russia and Ukraine. Yet, it should be pointed out that not only does this wave explain the rise of civilization at its very beginning, but also the commonly noticed phenomenon that the pendulum swings in history and produces cyclical phenomena on a large scale.

Those who want a deeper understanding of the evolution of consciousness (the waves), of how the various structures of the mind are downloaded from the hemispheres of the Earth to the brains of humans, and of how this in turn shapes human political and military behavior (and much else) may be referred to my books and courses, which also discuss where these waves come from. In short, however, the waves provide human beings with states of consciousness (structures of the mind), and as these states are altered the human perception of the world will undergo change. As the human perception changes, the kind of world humans create will also change, and this is why a wave movement of consciousness underlies the rise and fall of civilizations. The human brain then is looked upon more as a receiver of cosmic waves than as an isolated generator of thoughts and actions.
Fig 2. The rise of major empires in the Mediterranean European context resulting from the peaks in the Sixth Wave.
Fig 3. The rise and fall of the British Empire and the US in the seventh day of the Long Count.
Fig 3 (which is a detail of Fig 2) adds another perspective on the current conflict, which is that the 7th day (thirteenth baktun) that fostered Western power and dominance in the world is now (since 2011) over. Hence, as we have now entered the fourteenth baktun, the power of the West has been in constant decline. In the case of the United States, its military reach into Asia, to Iraq, Iran, Afghanistan and more broadly to Central Asia, has, regardless of who has been president, been in decline, and there is little to indicate that this trend will be turned around. In a sense, the current Ukraine-Russia conflict is thus also a reflection of the altered US-Russia relationship resulting from the shift from the seventh day (ending in 2011) to the seventh night. Putin senses that Western power is now on its way down and that this may be the time for Russia to assert itself and settle a score. What score? Personally, I think Russia does have a reason to be bitter at the West, because it failed to come to Russia’s aid as the Soviet Union collapsed in the early 1990s, when death rates increased, birth rates collapsed, and poverty and crime ran rampant in a very disorganized economy. Putin has come to be seen as the person who created stability and ended the extreme hardships of the critical situation in the 1990s, which is why he often repeats that the West cannot be trusted. The US failure to help at the time is something that is now coming back almost like a karmic reaction as its power is weakened in the seventh night.
If we go back further in time to Hitler’s so-called Operation Barbarossa, the name for the military campaign aimed at destroying the Soviet Union, we may also understand why Russia is especially sensitive to losing its ties to Ukraine. One of the primary objectives of this operation was to turn Ukraine into a bread basket for a growing German population and at the same time cut its food production off from Russia, with the intended result of causing death by starvation to tens of millions of its inhabitants. In a country which never forgets that it played the decisive role in the defeat of Nazi Germany, such things still carry emotional weight and add to the suspicion of the West.
Returning then to the particular topic of this article: as mentioned, what has not been included in Fig 1 is a map of the time period 2011-2031 CE that we are currently in, which should be map 1h. This is because the beginning of the 14th baktun has not yet manifested in a military movement of sufficient strength to qualify. Yet, it should be clear that in this way of presenting the beginning of the most significant shifts in the Mayan calendar over the past three thousand years, a Russian invasion of Ukraine at the beginning of the 14th baktun would fit perfectly into the pattern and would mean a change significant enough to fill the gap in Fig 1h. Such an invasion (or other incorporation of Ukraine) would follow upon the storms of Huns (Fig 1d) and Mongols (Fig 1f) emanating from the East towards the planetary midline that took place at the beginning of the previous nights. To add to these parallels, it should be noticed that it was the Mongol storm (in Fig 1f) that set an end to the Kievan Rus, the original Slavic civilization in Ukraine, and incorporated it into the empire it created. This collapse of the Kievan Rus makes the parallel to a potential Russian invasion of Ukraine even more direct. To further highlight the parallel of the Mongol storm from East Asia at the beginning of night 6 to the current situation, you may consider the recent strengthening of the friendship between Russia and China, with the potential of creating a Eurasian block.
The Kievan Rus civilization was founded by Vikings who moved into the Russian river system in the 9th century (see Fig 1e), but it grew into what is considered the original East Slavic civilization. It is however generally thought that the subsequent conquest by the Mongols (Fig 1f) for several centuries blocked the development of Slavic nations based in either Kiev or Moscow. In a sense, current-day Russia and Ukraine thus share a common trauma, but also an origin in the Kievan Rus, which adds strong emotional fuel to the situation. In accordance with the wave movement shown in Figs 1 and 2 (this is actually the same wave movement looked upon from different angles), it would thus seem highly likely that Russia at the current time would seek to incorporate Ukraine. In this conflict, my personal sympathies go essentially to Ukraine (which I have visited), because even if its democracy is far from perfect, it at least, in contrast to Putin and the Russian oligarchs, has such ambitions. I am however not very optimistic about the possibilities for Ukraine to effectively resist being reincorporated by Russia, as it has for centuries been part of the Russian Empire, whether Tsarist or Soviet.
Given the power of the wave that has created the pattern in Fig 1, it does not seem very likely that the sanctions that may be imposed by the West would have much of an effect on Russian politics, even if they could turn out to be painful for its oligarchs and people. To this should be added that also in the West (because of the shift to a night in the Sixth Wave), support for democracy has dramatically eroded in the past few years, resulting in a decreased ability to support Ukraine. The importance of the conflict, and of its authoritarianism-versus-democracy aspect, to the rest of the world is evidenced by the fact that several key members of the Trump administration were actively playing a role in it. President Trump never criticized Putin, even directly expressed his admiration for him, and certainly threatened to withhold US funds to Ukraine unless it investigated Joe Biden’s son. Several key persons in the previous administration, such as Paul Manafort, Michael Flynn and Rudy Giuliani, have also played roles in the conflict, usually to the detriment of Ukraine and hence of democracy. It thus seems that the current threats from Russia against Ukraine are part of a global assault on democracy, and their repercussions may also have a direct bearing on the future of democracy in the United States and elsewhere.
An expansion of Russia towards the West through the capture of Ukraine would almost certainly amount to a significant shift in military power favoring Russia at the expense of the West. Its weight would increase not only in Eastern Europe, but also further West, where it has already started to throw its weight around, so that West European countries may in the time ahead see themselves forced to rearm. Based on this, there seem to be reasons to expect a geopolitical restructuring of the Eurasian continent favoring the powers of the East, Russia and China, in a sense creating a new kind of Silk Road. What we are talking about is thus not only a small change in geopolitics, but something whose consequences may fundamentally come to alter the world.
The reason all of this may come to happen would thus be the power of a wave movement described by the Mayan calendar system, and a significant baktun shift in this wave in 2011 that most people today are oblivious to. For those who expected some kind of immediate change on December 21, 2012, the take-home lesson is that the Mayan calendar describes longer-term processes, in this case a wave with a periodicity of about 800 years, which modern people usually do not take into consideration. Since the Mayan calendar is based on the invisible quantum field (in contrast to other calendars or astrology), it is not cyclical in the sense of identically repeating events, but wave-like, so that similar kinds of movements or events recur with a clear periodicity, and this is what Fig 1 shows. This in turn means that even if many people may unconsciously sense the changes in energy and power that go on in the world, to get a complete picture it is necessary to analyze history in light of such energy transformations, where the relationships between East and West play a very significant role.
Regardless, it is because of this underlying wave pattern that time is on the side of Putin, whether there will be an actual invasion or not. The risks for the rest of the world are that a subordination of Ukraine could come to mean a boost to dictators, political strongmen and authoritarian forces everywhere, not only in Europe, but maybe even more so in the United States. Whether such a situation of dominance will last for long will depend on to what extent people everywhere, including in Russia and Ukraine, will be able to make the quantum leaps to the higher Seventh, Eighth and Ninth Waves and come to see reality through such higher filters. As I pointed out earlier, these waves are quantized and their effects are hence not deterministic, but potentialities. The higher waves still have the potential for creating democracy and prosperity for the many, and even spiritual renewal, but it would take a conscious intention and effort to manifest such phenomena in the world.
What is Macrocosmic Quantum Theory?
In order to explain what Macrocosmic Quantum Theory is, it may be best to give a short overview of quantum theory itself. That theory was developed in the beginning of the 20th century in order to explain the nature of matter and some of its interactions with light. The German physicist Max Planck was the first to suggest that radiation from black bodies could be expressed as energy packages, or what he would call quanta (a quantum in the singular). Especially at a time when many physicists believed that all major problems in their science had been solved, Planck hesitated to publish his proposal, but since this was the solution that made sense mathematically, he went ahead with it. Einstein however in 1905 used the idea of quantization successfully to explain the photoelectric effect, and after this it gained momentum. Theorists then increasingly became aware that the microcosmic world functioned according to different principles than our everyday world. The most important step in the direction of a quantum theory was then taken by Niels Bohr through the atomic model he published in 1913. The success of this model led scientists such as Heisenberg, de Broglie and others to further expand on the concept of quantization, and especially with Schrödinger’s wave equation a quantum mechanical model of the atom essentially as we know it today was developed.
Schrödinger’s wave equation describes how electron orbitals are defined by specific geometries, which we can identify as quantum states. Hence, quantum mechanics defines how geometries organize energy, and the properties of all matter (all elements) are created by the build-up of such quantum states to higher levels. A shift from one such state to another is called a quantum leap, and the energy needed to bring about such a leap is called a quantum. A quantum leap is instantaneous: in contrast to Newtonian physics, shifts in potential energy do not take place gradually or continuously. To further highlight the differences between Newtonian and quantum physics, it might be added that electrons in different quantum states are entangled and that their effects are non-local, implying that everything in the universe is connected through an underlying quantum field.
Fig 1. According to the Schrödinger equation, quantum states are energy distributions organized in accordance with specific geometries (in this case for the hydrogen atom).
Fig 2. The Periodic System of the Elements is created by the build-up of quantum states adding to the most basic ones in the hydrogen atom.
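To make the notions of “quantum state” and “quantum leap” above concrete, here is the standard textbook case (general physics, not specific to this text) of the hydrogen atom shown in Fig 1. Its bound states have the energies

[itex]E_n = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots[/itex]

and a leap from a higher state [itex]n_2[/itex] to a lower state [itex]n_1[/itex] is accompanied by the emission of exactly one quantum of light of energy

[itex]\Delta E = h\nu = 13.6\ \mathrm{eV}\left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right)[/itex].

Each integer [itex]n[/itex] labels one of the geometrically organized orbitals of Fig 1, and there are no states in between: this is the sense of “quantized” used throughout.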
The above was a short summary of microcosmic quantum theory, which is really the only form of quantum theory accepted by established science. For a long time, modern physics has however tried to unify the quantum theory that describes the microcosmic realm well with Einstein’s theory of general relativity, which describes macrocosmic space-time and gravitational phenomena. Attempts at such a unification, sometimes talked about as a Theory of Everything, have however not been successful, and according to the official world of science the microcosmos is described by one theory with one set of basic premises – quantum mechanics – while the macrocosmos is best described by another theory – general relativity – with another set of basic premises.
This state of affairs is obviously not very satisfactory. If we take as a starting point the common intuition that everything is connected to everything else, then we would expect the macrocosmos and the microcosmos to function in accordance with some common principles, because how could you argue that the two classes of phenomena were connected otherwise? In order to create a unified theory about the workings of the universe, what I have done is to develop a macrocosmic quantum theory, which is not identical with the microcosmic theory but shares many basic principles with it that may help us understand how the universe works. Macrocosmic Quantum Theory in fact provides a complete explanation of the evolution of life in all of its aspects, ranging from the first appearance of bacteria to the current development of artificial intelligence. It shows that the universe is designed to create life, and how this happens through macrocosmic quantum shifts in the center of the universe. As it turns out, such a theory is not just about the properties of dead matter (such as mass), but about the evolution of life, which ultimately finds its origin in quantum shifts.
At the time when I started to think about how the universe evolves, there was a consensus in the scientific community that this would happen through events of a physical nature that were randomly dispersed in time, and Darwinism would purportedly explain this when it comes to biological evolution. Similarly, when it came to the historical evolution of mankind, with its technological, social, religious and other mental aspects, the changes that have taken place have always been explained by phenomena of a material nature. An explanation as to why there is an evolution in the first place was however clearly missing, but as I started to study the ancient Mayan calendar system, this made evident patterns in the evolutionary process that pointed to common principles regardless of what form of evolution was considered. Certain sources were important for me to come to this realization. José Argüelles’ book The Mayan Factor in very broad terms demonstrated that historic evolution was related to the various baktun shifts in the so-called Long Count, the long-term chronology of the Maya. Moreover, Freidel, Schele and Parker’s book Maya Cosmos showed that there were a number of other such calendar counts that provide a framework for evolution at several different levels, including the Big Bang, the primordial quantum shift in the history of our universe. These observations compelled me to study how the various events in the history of the universe matched up with the various shift points in the Mayan calendar system, which I spent a few years studying in the period 1993-1996. The results were stunning in their consistency, and it became clear to me that evolution in all of its aspects follows wave patterns. These patterns could be summarized in the Periodic System of Evolution, which, while different from the Periodic System of the Elements in Fig 2, shares some very important commonalities. The first is that both systems are periodic, and the second is that they are based on the build-up of quantum states. Similarly to how there initially (in the 1800s) were gaps in the system of the elements, there remain gaps in the Periodic System of Evolution (notably in the 3rd Wave), which hopefully continued research will be able to fill in. It may be obvious from the Periodic System in Fig 3 that events in the history of the universe do not occur at random points in time, but seem to fit into what you might call a cosmic plan.
Fig 3. The Periodic System of Evolution (up until 2011) is created by the wave movement of seven peaks and valleys (column 1) in nine different waves. Hence, while each wave develops a specific and different kind of phenomenon and has a unique frequency, the process from “seed” to “mature fruit” in seven plus six steps is identical, and this is why there is a periodicity. (Note: the 8th and 9th waves are not included in this table. They have such high frequencies that it becomes difficult to identify, peak by peak, their specific manifestations.)
While this pattern might be looked upon as a novelty by modern people, it is quite consistent with the calendar system of the ancient Maya, who cut its essence in stone in the pyramid of Kukulcan at Chichen-Itza: nine levels (quantum states of consciousness) developed by Serpents (waves) of seven peaks and six valleys each (see the shadows on the staircase of the pyramid in Fig 4).
Fig 4. The nine-storied Pyramid of the Plumed Serpent in Chichen-Itza with the seven triangles of light and six of darkness (picture taken at the spring equinox).
Fig 5. The 1st Wave, creating the first monocellular organisms on our planet. The peaks mean that steps forward in the creation of life are taken, and their beginnings fit very well with what we know about the timing of the appearance of the first cells. Even if we do not know what the simpler life forms (preceding the formation of our earth 4.5 billion years ago) would have looked like, the wave took its beginning before the Big Bang, and we can thus conclude that the potential of creating life in the universe goes back to its birth.
This multitude of correlations was of course on a collision course with many of the fundamental assumptions of modern scholarship and science. Depending on a person’s preferences, it may be looked upon as an advantage or a disadvantage that the Macrocosmic Quantum Theory is consistent with an ancient world view of Serpent creator gods. As an example of such a wave, I show the 1st Wave (the lowest level of the pyramid), which created the earliest life forms on our planet but really originates already at the Big Bang. This implies that this universe exists in order to create life and that we are not here by accident, but as part of a much larger cosmic scenario.
The purpose here is just to convey some essential features of Macrocosmic Quantum Theory, and for details I refer the interested reader to my books, which respond to possible objections, etc. What should be pointed out here, however, is that in this theory all forms of evolution (whether galactic, biological, spiritual, mental or technological, etc.) are explained within the same conceptual framework. The various waves then convey different quantum states from the cosmic center that human beings develop resonance with and absorb. This explains essentially all aspects of human history, and Fig 6 shows how the human relationship to the divine has been altered depending on what quantum state has shaped the human mind.
A few of the quantum states in Fig 6 were described by the ancient Maya in terms of their geometries; the Maya also provided a cosmic source for these in what they would call the “Place of Creation”, the “Raised-up-Sky-Place” or the Tree of Life. The dark fields in them create what shamans would call veils, limiting the ability of humans to see the full reality. The point to realize is that as human beings develop resonance with these states, they will also start to see reality through the types of filters these provide. So, for instance, when people primarily resonated with the 6th Wave, they saw the world through duality favoring the left brain half. As a result, a patriarchal mentality dominated the world for approximately 5000 years, and only as the higher waves have later been activated has this particular mentality been transcended. The steps between these different quantum states are however quantum leaps, and humanity has not yet completed the last such leap, that to the 9th Wave, which only became accessible in 2011 CE.
Fig 6. The basic Macrocosmic Quantum States of the higher waves, their times of activation and the resulting influence on the Human relationship to the Divine.
The “Great Reset” of 2021, November 21, 2020
Posted November 21, 2020 | By: Calleman | Article by Carl Johan Calleman, Ph.D.
The spiritual community of the world has for some time now heralded the idea of living in the present moment and to “Be Here Now.” This ideal of not mulling over the past or constantly focusing on what is to come makes perfect sense as long as you and the world have the wind at your back and are evolving forward. But what about when this is no longer the case, and it feels like, in some profound sense, the world and you are no longer moving forward? Then it may be necessary to get a grasp of exactly what drives the evolution of humanity in order to find a successful way out of an established time-line. This is typically what happens when you are entering a dark age, where the light is not as easily accessible as in a light age. One of the most typical characteristics of a dark age is that people are not automatically moving forward. In a Dark Age, things come to a halt, and I think many can relate to this at the current time.
So what is a “Dark Age”? What is its origin, and how does it differ from an age of light? These are questions that cannot be dealt with adequately from the ideal of living in the present moment; to have answers requires studying the time-lines of evolution going from the past to the future, sometimes in a long perspective. Such a study is necessary if we come to a dead end and want to find a time-line that allows us and the world at large to move forward. In my own view, the only time-lines that allow us to get a full understanding of the evolution of consciousness and how this manifests in the present world are those that go back to the calendar of the ancient Maya. In contrast to, for instance, common astrology, these evolutionary time-lines are not just repeated cycles, but actually have directions and are going somewhere, ultimately towards infinity. Moreover, these time-lines are not based on man-made phenomena, but are inherent in Creation. They go back to the origin of this creation, when neither stars, nor planets, nor human beings existed, and through this we can have some guarantee that they are related to its very purpose. Naturally, at the present time human beings are needed to interpret this purpose, but the time-lines themselves are not human creations, and so they have a deep significance which goes beyond whatever particular interests human individuals may have.
The critical point that I feel so many modern people have failed to understand is that these time-lines are not straight lines. The consciousness that creates the evolution of the universe, including that expressed through human beings, is instead a wave form. Time, or if you like the illusion of time, is created by such wave forms, and so we will not be able to understand how the world evolves without taking these into account. These wave forms alternate between peaks and valleys, or if you like periods of light and darkness (which may be referred to as days or nights, even if they cover much longer time periods than common days and nights). Only to the extent that there is a widespread understanding of these cosmic wave forms (which are not physical in the traditional sense) will there be a great awakening. As long as there is not an understanding of their nature, people will attempt to live their lives based on the premise that everything will or should stay the same or develop in a linear way. If it does not, many will presume that there is something wrong, while in fact the wave-like evolution of the universe is part of its design.
As a consequence of this wave-like nature, to live in the present moment only makes sense in a universe that does not undergo cosmic quantum shifts and does not evolve in a wave-like manner. In a series of books I have then also argued that the dragons and cosmic serpents (including the Mayan/Aztec Plumed Serpent) are actually cosmic quantum waves that drive evolution, and that such a world view is more true than the current one based on the idea of a linear, uninterrupted progress. There is a reason that the most prominent pyramids in the Western hemisphere were dedicated to such a wave, the Plumed Serpent, seen as the bringer of civilization, and that Chinese parents to this day especially treasure children born in the year of the dragon. I believe the worldview based on linear progress is now coming to an end in today’s world as well. In reality, the periods of ups and downs (or forward movement and rest) are inherent in the way the universe is designed and are not of human making. These waves are the consciousness of the universe and go back to a time when nothing physical existed.
Fig 1. The rise and fall of major empires/civilizations originating in the Mediterranean/European context as matched to the time-line of the Sixth Wave.
One such wave is shown in Figure 1: the Sixth Wave, which is mostly referred to as the Mayan Long Count. This describes the rise and fall of major empires/civilizations, and it has a lot to tell us about what is going on in the world today. This seems clear enough, but the reason that the evolution of consciousness sometimes may seem more complex than what is given by such a single wave is that it is created by an interference pattern of nine different waves (or time-lines). Hence, we can never gain a complete picture of changes in this world unless we take all of these waves into account. Yet, we can analyze them one by one and then reconstruct what is happening on the level of human consciousness.
Fig 1 allows us to get a clear definition of what a dark age is. Dark Ages are the valleys between major civilizations that I have previously called nights. It should be clear from this diagram that the dark ages are inherent features of the evolution of the universe. Despite superficial appearances, they are not human creations, as their timing has a cosmic origin mediated by the wave. The darkness in a dark age then does not necessarily mean a time of evil (even if there are times when humans turn them into that). Rather, it is part of a pattern of death and rebirth. From Fig 1 we can then see that the most well-known example of a dark age, the time period AD 434-829 following the collapse of the West Roman empire, is just one example. Another such period, less well known but still recognized by historians, is the Greek Dark Ages, 1149-749 BC, after the fall of the Mycenaean and Minoan cultures in the so-called Bronze Age Collapse. So maybe this tells us that if we want to understand things and have some power to influence the course of events, it is a good idea to look at things in a larger perspective, and that living in the present may not fully guide us to this.
Looking at the larger perspective becomes even more relevant if we notice that as of AD 2011 we ourselves are living in a dark age following the period of dominance of Western empires, notably those of Great Britain and the United States, in the preceding era of light, 1617-2011. As I have argued in a couple of articles, the current situation in those particular countries, and to some extent in the rest of the world, is a reflection of a shift in the above time-line, and if we want to understand what is happening we will have to recognize that this time-line is a wave and not a straight line.
As I have argued earlier, part of the effect of this night in the Sixth Wave is the Covid19 pandemic, which has hit the Western hemisphere much harder than the East. Some very recent events are also fully consistent with the decline of the West in the time-line in Fig 1. One is that the questioning of the presidential election results (not the result by itself, but the questioning of it) has meant that the United States has lost much of its role as a beacon of democracy. The United States is now also abdicating much of the dominating role in the world that it held in the previous era. The Transpacific Partnership (of which the US was part), which was annulled by Trump, has thus for instance now been replaced by a huge trade organization in which the United States is not taking part, essentially meaning that China will gain the leading role in trade in this part of the world. Similarly, the further withdrawal of troops from Iraq and Afghanistan means that the US is abdicating the role it has had in the Middle East and inviting Russia, Turkey and/or Iran to take over. I leave aside here whether these steps were good or bad for the world and am just pointing out that things are developing in accordance with Fig 1. I think it is reasonable to expect further such abdications of the role of the United States in the immediate time to come.
Ironically then, despite the fact that Trump campaigned in 2016 under the slogan of Make America Great Again, his reign by any objective standards seems to have accomplished the very opposite. While he and his followers are mostly seeking someone other than themselves to blame for this, whether it is the democrats, China, the WHO, a global cabal or something else, I suggest that there are larger forces in play here, forces that are inherent in how the universe was designed, namely the shifts between days and nights and the ensuing evolution of consciousness.
Certainly, the ancient peoples of our planet recognized that evolution was not linear, but subject to death and rebirth, and that no individual, or nation for that matter, would escape this. However, if people are not aware of the wave-like nature of evolution, they will just continue to push agendas that are dead ends indefinitely, expecting things to stay the same (in this case meaning that the Western dominance of the world would continue). I think the lesson from this is that the evolution of consciousness is a much more powerful force than the thoughts or desires of human beings, and this goes for all of us. The wave-like evolution of consciousness takes its course regardless of what anybody thinks. It has a stronger power than human thoughts, and this is even more so if people are unaware of how its course is designed.
However, there are as I said earlier nine different waves, or if you like time-lines, and we as humans may be in resonance with any or all of them by means of which an individual interference pattern that crafts our path in life is created. It is possible for us to make quantum jumps between such time-lines and to the extent that we do we will also make quantum jumps between different states of consciousness. This in turn influences, or even shapes the kind of thoughts that we have. I will however not here talk about all the nine waves, but only add the influence of the Seventh Wave to that of the Sixth Wave that was shown in Fig 1. The result is an interference pattern of the two waves, which is directly relevant to how things will evolve in the near future.
While the Sixth Wave above describes the ups and downs of the overall long-term evolution especially of Western civilizations, the Seventh Wave, which was activated in AD 1755 describes the evolution of several special aspects of these after this point in time. What I will focus on here are only the systems of governance and the phases of economic growth. The activation of the state of consciousness of the seventh wave has for instance led to the birth of democracy (Fig 2) and industrialism (Fig 3), phenomena which are some of its hallmarks.
Fig 2. The evolution of democracy as a function of the peaks in the Seventh Wave.
Fig 3. The evolution of the industrial economy as a function of the peaks in the Seventh Wave.
In Figures 2 and 3 there is an enormous amount of information, and I will here limit the discussion to what is relevant for our current time. Hence, before the consciousness shift brought by the Seventh Wave in 1755, there were essentially no democracies in the world, and for most of the timeline in Figure 1 the empires that it created were based on dominance, brutality and subjugation of the poor peasants and the slaves. Only with the activation of the Seventh Wave does a process begin in which – notably in its peaks – steps are taken in the direction of increased equality, democracy and respect for the dignity of the human individual. As we can see, this is not a straight line, and even if the process is complex, its overall direction is quite clear, so that by the end of the seventh peak in 2011 only a few monarchies remain, which with the possible exception of the British and Thai are completely powerless. Inherited privileges of the nobility that used to be the norm have disappeared, and republics where in principle all individuals have equal rights have replaced them. This is a result of the peaks, or what you may call the periods of light, in the Seventh Wave and the influence those had on human consciousness. In parallel with this development towards democracy there has also been a development towards increased globality. What in the Sixth Wave was a world of strictly separated nations has in the Seventh become a world that is integrated, where what happens in one corner of the world affects it all. This was not the case before the activation of the state of consciousness of the latter.
Similarly, before the activation of the Seventh Wave in 1755, there was no industrialism anywhere in the world; production was based on the use of nothing but muscle power. Step by step in the peaks of this wave, fossil fuels and later electric and nuclear power have however come into use, together with their many industrial applications. Mass production of goods, say for instance of cars, was unheard of before the shift in consciousness to the Seventh Wave had taken place. What humans may create in terms of technology is hence a direct consequence of this particular state of consciousness, but since this is something I have dealt with in books, I will not detail it here. Naturally, the results of these steps forward in technological development have also been times of economic growth, in contrast to the depressions that, especially towards the end of the wave, have been associated with the nights.
An important thing that we can see from Figures 2 and 3 is that there is a parallel between the Sixth Wave (Fig 1) and the Seventh Wave (Fig 2) in that the movement forward happens during the days. Moreover, not only the Sixth Wave but also the Seventh Wave has its nights or dark ages, of which the one that most clearly stands out in the Seventh is the one between 1932 and 1952. Hence, the rise of nationalism, including fascism, is something that happens especially during the nights, as democracy then tends to be suppressed. Nights also tend to cause economic depressions.
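As a rough cross-check of these datings, below is a minimal sketch (my own, not from the article; it assumes the Seventh Wave runs as 13 alternating periods, seven days and six nights, of one 19.7-year katun each, starting from the activation year 1755 given above) that lays out the wave’s days and nights.

```python
# Enumerate the 13 alternating periods (7 days, 6 nights) of the Seventh Wave.
# Assumed inputs, both quoted in these articles: activation in AD 1755 and a
# katun-length period of 19.7 years.
KATUN_YEARS = 19.7

def seventh_wave_periods(start: float = 1755.0):
    for i in range(13):
        kind = "day" if i % 2 == 0 else "night"
        begin = start + i * KATUN_YEARS
        yield f"{kind} {i // 2 + 1}", round(begin), round(begin + KATUN_YEARS)

for label, begin, end in seventh_wave_periods():
    print(f"{label:8s} {begin}-{end}")
```

Rounded to whole years, this places night 5 at roughly 1932-1952 and the seventh day at roughly 1991-2011, within a year of the periods singled out in the text.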
I suggest that the world can now (2020) to a large extent be understood as a result of the interference pattern of these two waves and the state of consciousness this creates in human beings. Since in both waves we are now in dark ages, it is not surprising that the world has at least partially come to a halt. On the surface this halt is of course caused by the Covid19 pandemic, but I would at the same time argue that this would not have had the effects that it has had unless it fitted into the consciousness field that these two waves combined have created.
So what does all of this mean for us now? Well, we can understand why Western dominance of the world now seems to be coming to an end. We can also understand why, particularly after 2016, there has been a rise in nationalism and distrust in democracy among large groups, especially in those countries, which has led to a concomitant rise of political strongmen acting nearly as dictators. This is in marked contrast to the time period 1992-2011, when democracy seemed to spread everywhere. Again, the time-line of the seventh state of consciousness is not a straight line but a wave.
Fig 4. The current wave period in the Seventh Wave.
But what can be said about the economic situation? To address this, I think it pays to look at where we are now in the Seventh Wave in more detail (Fig 4). One thing I want the reader to notice is that midnight in the current wave period 2011-2031 is not in 2020, but in the year 2021. Thus, if there will be a Great Reset, this will take place towards the end of 2021 rather than in 2020, although many have thought of 2020 as the critical year. When it comes to how the world economy has developed, it has, as I mentioned, been in depression during the nights. This should then be true also for our current wave period, even if this only came about in 2020 and may be even more troubling in 2021.
Something that I believe may set the current depression apart from depressions in earlier nights is that the current crisis has demonstrated with all clarity the fictitiousness of the value of money. Even if money in the present world is power, it really has no substance and just amounts to digits in bank accounts or elsewhere. In the current situation, the United States has for instance simply printed up trillions of dollars in support of its population during the Covid19 pandemic, and similar actions have been taken in Europe. Rightly so, I think, as even fictitious money may have power. It may however also have begun to dawn on many that money can be generated out of thin air if the political will is there. What determines for what, and how much, money shall be printed is simply power.
Yet, the ability to print money on a large scale will only be possible in a country that maintains global power, which, as I pointed out above, is now highly questionable regarding the US, which holds the world’s primary currency. For this reason, and because of the convergence of the processes I discussed above in the Sixth and Seventh Waves, I think we have very strong reasons to expect a collapse of the US dollar in 2021 and its replacement with something else. This, among other things, is what I refer to as the Great Reset of 2021. A change in administration will hardly alter this perspective. Exactly how the Great Reset of 2021 will play out is unclear, at least to me, but I think I have outlined the evolution of consciousness leading up to it. I admit that I may not be using the Great Reset in the same sense as its originator Klaus Schwab. Yet, regardless of political views, it seems clear that after this pandemic the world will need to be reset in order to adapt to an entirely different interference pattern created by the waves. Hopefully, this reset will happen in accordance with the interests of the large masses of people and not those of IT tycoons, but this still remains to be seen. It may thus be time for everyone to think about what such a reset should mean. I feel this should be done in light of the fact that from September 6, 2021 (midnight) onward there will be a climb to July 16, 2031 (dawn), which holds the potential to generate a world where democracy is again cherished and there is global collaboration to tackle the world’s problems.
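As a back-of-the-envelope check of the two dates just quoted (my own arithmetic, not the article’s; it assumes the current night runs exactly one 19.7-year katun from the October 28, 2011 shift point cited in the next article):

```python
from datetime import date, timedelta

# Assumed inputs, both drawn from these articles: the night begins at the
# October 28, 2011 shift and lasts one katun of 19.7 years.
KATUN_DAYS = 19.7 * 365.25
night_start = date(2011, 10, 28)

midnight = night_start + timedelta(days=KATUN_DAYS / 2)  # midpoint of the night
dawn = night_start + timedelta(days=KATUN_DAYS)          # end of the night
print(midnight, dawn)  # ~2021-09 and ~2031-07
```

The midpoint lands in early September 2021 and the endpoint in mid-July 2031, within a few days of the September 6, 2021 and July 16, 2031 figures given above.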
Will the Covid19 pandemic come to an end if Trump loses the election?
October 26, 2020 | By: Calleman | Article by Carl Johan Calleman, Ph.D.
"In a couple of articles earlier this year I pointed out that psychological and existential factors are almost completely ignored when it comes to understanding how pandemics, such as that of Covid19 has affected the global population. Yet, it is well-known that adverse medical conditions, so-called co-morbidities other than the virus itself have contributed to the severity of the disease and why should existential factors be excluded from these? For many other types of diseases stress factors of a more psychological nature is known to strongly influence their onset. Cancers for instance, which is generally thought of as a disease with a physiological etiology is more common in singles than people living in stable marriages, which implies psychological or at least life style factors for the risks. How severely or often a disease manifests thus is not independent of the general life situation or outlook on life of an individual. There is no particular reason to believe that the Covid19pandemic should be different in this regard.
Since for the most part pandemics hit in dark ages, it is relevant to look at the broader cosmic context of this one and its relationship to the evolution of consciousness. Most importantly, as of October 28, 2011, the quantum state (of the Sixth Wave) that created the Western dominance of the world came to an end and will not come back in the next four hundred years (Figs 1 and 2). Because of this very long period, the effects of the quantum shift are in significant regards irreversible, but also slow in coming. This downturn of the West (affecting primarily the US and the UK, but also other western European powers and, less markedly, the Western hemisphere in its entirety) does not come about because of some actions that these parts of the world have taken, but is a direct consequence of the wave-like evolution of the macrocosmic quantum states. Ancient peoples were aware of this wave movement and would let its ups and downs be symbolized by the Plumed Serpent or the Phoenix Bird, symbols of death and rebirth. Yet, today the shifts in the quantum field are almost completely unknown to the population at large, who think of history as a linear process in which the West would forever maintain the role it gained in the 7th day of the Sixth Wave (see the Mayan Long Count in Fig 1).
Fig 1. The rise and fall of civilizations in the Long Count.
This lack of knowledge, and the absence from our educational system of any account of how the evolution of the universe takes place on several levels, makes it difficult for many to see what is now happening in the world as the result of macrocosmic quantum shifts. Yet, the end of Western dominance was predictable, and I wrote about it in my book The Nine Waves of Creation before any visible signs of it had really appeared. Regardless, since this downturn is not explicable through a linear understanding of history, many have now chosen to explain it by conspiracy theories, while we are in fact witnessing the effects of a consciousness shift originating on a cosmic scale as the 7th day has turned into a night.
In Fig 2 we can see in more detail how the quantum shift in 2011 has parallel effects on both the global and individual levels. The wave movement in Fig 1 was for instance expressed not only in terms of western dominance in relation to the rest of the world, militarily and economically, in the time period 1617-2011, but also in terms of left-brain thinking (the scientific revolution took place in Europe at its beginning). At the same time as the light went out in the Western Hemisphere in 2011, by resonance the light similarly went out in the minds of human individuals, resulting in much anger, as we could see in 2016. Hence, as Western dominance began to go out on the global level, the duality (see Fig 2) that has created the ego also began to go out, so that a mentality of equality and balance is over time poised to take over (but presumably not without resistance). While the period that began in 2011 definitely has a Dark Age character, this is thus not necessarily only negative; it has a purpose. It serves to rebalance the world after a long period of left-brain/Western dominance. The task then becomes to keep what was beneficial and discard what was not from the preceding time period. In Fig 2 we can see more clearly how the last wave period in Fig 1 is associated with different quantum states and also how this affects the global and human minds in synchrony.
Fig 2. The consciousness shift in 2011. (a) day state and (b) night state.
As mentioned earlier, since the night we are entering on the level of the Sixth Wave is so long, it was slow in coming, and its effects became notable only with the Brexit/Trump votes in 2016 (five years after the shift). Yet, to see the power of the shift, we can note that neither the US nor the UK has had a peaceful political moment since. Their respective populations have been sharply divided as to whether to try to live in accordance with the good old days of world dominance provided by the dualist quantum state (a) or to adapt to the new state (b), which will increasingly shape the human mind. The Brexit/Trump votes in 2016 essentially reflected a desire by parts of the respective populations to maintain the world dominance (or at least the mentalities that went with it) that had been developed during the past 400 years. These elections meant reactions against, rather than an adaptation to, the quantum state (b). Hence, rather than adapting to the new quantum state that called for global equality, those supporting Brexit/Trump wanted to go back to, or maintain, some ego-boosting dominance on the part of their nations. Such reactions to a cosmic quantum state (opposite to what the new quantum state calls for) have happened before in history. It is a quite natural reaction, since people of high age especially have for a long time had their mentalities shaped by the previously ruling quantum state, and it will reasonably take some time to adapt to the change that has taken place at a cosmic level. Yet, such attempts to block the course of history in accordance with the 6th Wave cannot be long-lasting.
In addition to this constant state of political distress in recent years, the West has been further weakened not only by the global pandemic especially affecting the Western Hemisphere (see Fig 3), but, especially in the US, also by very serious consequences of global warming such as forest fires and strong hurricanes. All of this is connected to the cosmic consciousness shift I just mentioned, as the duality of quantum state (a) also created a mentality of dominance over nature in human beings, and the new quantum state (b) no longer supports this. Overall this has created what seems like a negative spiral for the West that can hardly be broken by the mindset of (a). What might have seemed right before 2011 no longer applies in the same way. Hence, unless we want the negative spiral to continue downwards, we will have to adapt to the cosmic quantum state (b), which mandates a mentality of non-dominance."
In this perspective the Covid19 pandemic, with which I started the article, may be placed in another context: that of the shifting states of consciousness. The map in Fig 3, showing the total number of deaths associated with Covid19, clearly shows how the Western Hemisphere has been much harder hit by the pandemic, which would be consistent with the turning off of the light of this hemisphere shown in the quantum shift in Fig 2. I have then suggested that many of the Covid19 cases are strongly conditioned by the shift in consciousness that began in the cosmos in 2011, and especially by the extraordinary amount of stress that this has resulted in on the political level in the West. (PTSD, the President Trump Stress Disorder, caused by incessant tweeting, has obviously added to this stress by creating much widely spread discord.) The fact that for most people the quantum shift and its nature is unknown has, I believe, added to the state of confusion and hopelessness. Yet, here the reader may have to decide whether he or she agrees with me that psychological or existential factors may have contributed to the Covid19 disease."
Fig 3. Total accumulated deaths (October 23, 2020) attributed to Covid19.
As an alternative explanation of the East-West discrepancy, many have suggested that China created the virus and is to blame for the pandemic and the harm this has done to the West. For my own part I am open to the possibility that the virus was manipulated and leaked from one of their labs (such events are not uncommon anywhere in the world), but the suggestion that it could have been designed only to infect the genetically diverse population of the Western Hemisphere and essentially spare the East seems absurd to me. It is not conceivable that anyone could create a virus with an address label. What seems like a better explanation of the enormous difference between how the Western and Eastern Hemispheres have been affected is that the East has not experienced the turning off of the light that the West went through in 2011 (Fig 2) and so has not suffered the same existential shake-up."
Let me now return to the question of whether the pandemic will disappear if Trump loses the election. I am not here trying to make the case that Trump has mishandled the pandemic. Overall, he probably has, and at the very least he has quite consistently downplayed it. Thus, if he loses and the pandemic goes away, it would seem as if the incoming president had handled it better. Obviously Trump's political enemies will create such a narrative, and with some right. Yet, I should emphasize that this is not the case that I am making here. What I am saying is instead that if Trump loses, it would reflect a significant mentality change among the American electorate, an adaptation to the new quantum state (and the new world that will go with it), and it is such a mentality change, and its consequences for human health, that I suggest may cause the pandemic to go away."
It could very well be that the current (October 2020) surge of Covid19 cases in the US and Europe (and the Western Hemisphere in general) is linked to the stress especially of the US elections, which by themselves may have consequences for the spread of the pandemic. Few could deny that these elections will be critical for the future of the world. What I am suggesting is that if Americans surrender to the quantum shift and accept a role of non-dominance in the world, this may also have consequences for the spread and severity of the virus, even to the point that it would relatively soon go away in the event of such an outcome. This would be linked to Trump losing the election, as Trump in fact explicitly idealizes the idea of domination. The Spanish Flu of 1918-1920 seems to have come to a sudden, almost miraculous end. Possibly this happened because herd immunity was attained (nobody knows for sure), but possibly it was because the malaise associated with the end of World War I had passed. Maybe a similar malaise will now pass."
When we look at Fig 1 and the Western dominance in the 7th day (1617-2011), we may primarily think of dominance in the form of colonialism and other expressions on a global scale. Yet, as we can understand from Fig 2, this mentality of dominance has also been expressed in domestic politics and individual relationships. What this means is that the very same mentality of dominance in the period 1617-2011 also manifested, for instance, in slavery, the near-extinction of Native Americans, the suppression of women and the emergence of huge economic inequality in the then victorious Western powers. (I should say here that the reason I am bringing up only negative aspects of US history is not that it reflects my overall assessment of the country, but that it is the very purpose of a night to deal with what was not accomplished in the preceding day, and so these aspects need to be highlighted)."
As expected from an adaptation to the new quantum state (b), after 2011 we can also see how such abuses of power and their present-day expressions begin to be reversed. Since 2011, when the shift into a new quantum state took place, the United States has in fact begun to export a new mentality consistent with a non-dualist quantum state. Examples of this are the #metoo movement, Black Lives Matter and the Bernie Sanders campaigns against economic inequality. That this happens at this particular time is not an accident. These are movements that are consistent with the new quantum state (b) and are thus examples of phenomena that will become prominent over the next 400 years. Yet the old quantum state (a) still retains an appeal, much because there is a light in it that creates its duality, and this is partly why many will still want to go back to it. This is the reason that both Brexit and Trump have gained an almost religious status among certain groups of people. Paradoxically, however, trying to resist the new quantum state and recreate a world whose light has gone out just makes the situation worse, and it is in such resistance to the cosmic plan that the greatest risk currently lies."
Most of the above are just rational consequences of Macrocosmic Quantum Theory, but the proposal that the pandemic will go away if Trump loses the election is a hypothesis, and one which is obviously wildly outside the confines of established epidemiology. Yet, it seems to me that no one has any better explanation of how the pandemic has spread the way it has, both in space and over time, and the reader will now know what my proposal is based on. For this reason, and because of increasing evidence that there is a very direct relationship between psychological factors and the health of the immune system, I think such an outcome is possible if Trump (and other proponents of maintaining dominance, both domestically and in the world) loses this election. If the pandemic relatively soon miraculously disappears, I think this would be the explanation."
NEW BOOK RELEASE Just out 11 June 2020
The Pineal Gland, Multidimensional Reality, and Mayan Cosmology
Quantum Science of Psychedelics
“...Calleman’s Quantum Science of Psychedelics shows how psychedelics are a doorway into all nine dimensions that synchronize us with the highest states of consciousness. Calleman’s entirely new theory of evolution based on macrocosmic quantum science has profound implications for our understanding of altered states. His book is a must-read for students of the Mayan calendar and the quantum and for anyone seeking healing and advanced awareness.”
Barbara Hand Clow, author of The Mayan Code
Carl Calleman's new book is a groundbreaking exploration of how psychedelics and quantum science are vital to understanding the evolution of consciousness and reality
• Explains why altered states of consciousness exist, how they work, and why psychedelics have the effects that they do
• Describes how quantum waves, rather than the DNA molecule, have been the driving force behind biological and historical evolution
• Explains how psychedelics interact with the human mind to create altered states that may further the continued evolution of consciousness
Dean Radin PhD
Chief Scientist, Institute of Noetic Sciences
Professor, California Institute of Integral Studies
Author, Supernormal, Entangled Minds, and other books.
United Nations World Oneness Day of October 24, 2017
Dear Friends,
"This year, the World Oneness day, which is celebrated every year at the United Nations day of October 24, has the distinction of falling on the very maximum of the peak in the Ninth Wave.
While there are many ways of honoring this intention of Oneness, to tune into the Ninth Wave is the most powerful. Most importantly, this makes it part of an ongoing practice consistent with the long-term destiny of humanity."
Disasters and Non-duality – The Consequences of the Eighth Wave
going into a NIGHT on September 27, 2017
*** It is important to read all of this article to get Carl’s message of hope. ***
September 26, 2017 | By: Calleman | Article by Carl Johan Calleman, Ph.D.
"The recent time has seen an unusual amassment of natural disasters, especially in the Western Hemisphere, which seems to coincide with a political leadership crisis in the United States that is expressed on a daily basis. Is this timewise correlation of dramatic and sometimes catastrophic events really a coincidence or is there, as I have discussed in an earlier article ( an underlying reason in that the Plumed Serpent (that is to say the Sixth Wave) now again has left this Hemisphere? According to the ancient Maya the wave movement of the “Plumed Serpent” was behind the changes in consciousness that may express themselves in many different ways.
Hence, when the Plumed Serpent, also known as Quetzalcoatl, arrived, it brought prosperity. On the other hand, when the Plumed Serpent left a civilization, this could result in warfare, natural disasters or religious change, which could all lead to its downfall. To them it was as if the Plumed Serpent, which we can now only understand meaningfully as a creation wave, carried the spirit of civilization, and so to the ancients this was an important entity to honor. Most people today would probably think of this as just a story, but what if it is true and the ups and downs of civilizations follow such a wave movement? Regardless, it seems we are now at a time when different forms of disasters are conspiring against us, whether political, military or natural."
Another explanation of the current state of the world that has been proposed is that an alignment in the constellation of Virgo on September 23 would be a manifestation of Revelation 12, which would connect our momentous times to the Christian scenario of the Apocalypse or, according to some, to Sitchin's purported Nibiru planet. Yet, of course, the world did not come to an end on this date, and no Nibiru planet has appeared or will ever do so. The reason the world will not come to an end on a specific date is that the divine creation process – as we may understand it from the Nine Waves of Creation – does not have any built-in end date. Instead it is designed to continue endlessly in accordance with these waves. This however does not mean that I think the prediction of the September 23 advocates – that we as a species have now really come to a critical point in our evolution – was completely unfounded, or that we would somehow be immune to the effects of dark ages defined by the waves of creation. There is much historical evidence that nights in the Mayan calendar carry different kinds of hardships depending on what waves we are looking at. The valleys in the Sixth Wave shown in Fig 1 coincide with such dark ages, which have brought major civilizations down, and I think it is an illusion to believe that the current global civilization dominated by the West would be immune to this."
Fig 1. The rise and fall of major human civilizations as a function of the peaks and valleys of the Sixth Wave (wavelength 788 years).
Given our position in the wave movement in Fig 1 after 2011 (when this turned into a valley, or night), I thus think that we have very serious reasons to consider that the Plumed Serpent (the Sixth Wave) is now abandoning the Western Hemisphere, as it has done many times before. Without sensationalizing or looking blindly at any singular date, I think we should consider the upcoming turn into a night also of the Eighth Wave on September 27, 2017 (Fig 2) as a point in time when the interference pattern of the waves creates a critical shift. The reason is that the approach to this shift has been relatively dire because it has taken place on top of waves, the Sixth and the Seventh, that have already turned into their night phases."
Fig 2. The interference pattern of the higher waves of creation after the shift in 2011. Note that the Sixth Wave at the bottom is also going downwards, but because the downturn of this wave is so slow it is not easily visible on the scale of this diagram.
At first sight this shift in the Eighth Wave may not seem such an important event, considering that this wave (with a much higher frequency than the Sixth, and a wavelength of only 720 days) has turned into a night several times before, including most recently in 2013 and 2015. Yet, the context of the current shift is different, coming after the Brexit/Trump votes in 2016 that marked the beginning of the tangible downturn of the Western powers. The shift into a night of the Sixth Wave in 2011 (see Fig 3), which because of its low frequency (wavelength 788 years) took until 2016 to clearly manifest, is affecting not only East-West relationships but also, through resonance, the relationship between the two brain halves."
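For concreteness, the dates quoted here follow from the stated wavelength by simple arithmetic (my own check, assuming that a day and a night each last half a wavelength):

$$ 2011 - \tfrac{788}{2} = 1617, \qquad 2011 + \tfrac{788}{2} = 2405, $$

so on this model the day of Western dominance spans 1617-2011 and the night that has now begun would last until about 2405.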
Fig. 3. The effects on a global and a personal level of the Sixth Wave turning into a night. The dominance of the Western hemisphere as well as of the left brain half (for better or for worse) is coming to an end with the new mind frame of the night that was activated in 2011.
Since the human mind is directly influenced by the global mind and by what happens on the geophysical level (as was clearly evidenced in my book The Global Mind and the Rise of Civilization), we should not be surprised if political mindsets are also influenced by this powerful shift on the level of the global mind. Depending on which waves people have created resonance with (which determines the filter through which they perceive reality), they will come to different conclusions as to what is happening and what should be done."
As we can see in Fig 2, where all the waves are shown running in parallel at the current time, the turn into a night of the Eighth Wave means that for the 360 days following September 26, 2017 (up until September 22, 2018), all the creation waves except the Ninth Wave will remain in dark "valley" states. I, for one, think that we have strong reasons to believe that destruction, both politically and naturally caused, will intensify even now, and that many will find it hard to see a meaning in life. There is a real risk that the compounded effects of the nights in several coinciding waves may add even further to the series of real or potential political and natural disasters that we have recently seen accelerating."
To me, the interference pattern created on September 27 has an apocalyptic ring to it, where only those that have developed resonance with the Ninth Wave and unity consciousness will have wind on their backs and be guided forward into the future. It in fact seems as if people are now going in different directions: one group that takes active steps towards unity, and another that stays with nationalism, racism and sexism or other ideologies of separation. What determines which group you will belong to is what wave you resonate with. The idea that humanity has come to a decisive crossroads has been proposed many times before, but I think it is only now that this is happening."
I do not think that there will be much relief for humanity until 2031, when the Seventh Wave turns into a day. In this time period the Western Hemisphere is likely to be hit primarily, as the light that supported its dominance up until 2011 has now gone out (Fig 3). We do not know exactly how this downturn will play out. Yet, history (or, if you like, the Plumed Serpent) makes its way in one way or another in accordance with the shifting phases of the divine waves, and one thing we do know is that the more people are stuck in mindsets of duality, the harder it will be."
How then can such a potentially destructive era, which I consider likely to come, be understood in any kind of positive way? Many will of course think of such a destructive era as a punishment by God or as a karmic response to earlier actions, but would this really be the purpose of the intelligence that created this universe? Could it instead be the only way of creating a world of equality? If we look at it more objectively – as the direct result of an interference pattern of waves influencing the human mind – then we may have an alternative way of seeing this era: as an interference pattern supporting a development towards equality (or even a golden age), which is meant to manifest as humanity enters a new day of the Seventh Wave in 2031. Why would this be? Essentially because, as the Eighth Wave now goes into a night, there will be no wave favoring the duality of the mind or a dualist perception of reality. Moreover, the only non-dualist wave that will be oscillating into a day mode is the Ninth Wave, which brings enlightenment and a unity frame of mind. So even if at the visible level there may seem to be many disasters (political, natural and possibly military) because of the compounded nights, the interference pattern causing these disasters will paradoxically at the same time be the very interference pattern that creates a path towards unity for those that develop resonance with the Ninth Wave."
This would explain not only why many spiritual traditions of ancient origin have foreseen a return to unity, but also why the path there, as for instance in the Christian apocalyptic scenario, would go through a series of catastrophes. The unity consciousness of the golden age (or the New Jerusalem, if you prefer biblical language) in fact presupposes equality between all nations and individuals on a planetary scale. For this reason the path to it will require that humanity goes through a period of mental non-duality, which is what the upcoming interference pattern provides. History has shown that human beings for the most part do not voluntarily give up a state of dominance they have had over others, and it is exactly the dualist mind of the Sixth Wave days (see Fig 3a) that in the past has legitimized dominance. In order to prepare for equality and the manifestation of unity consciousness, an interference pattern of non-duality will then have to dominate the minds of human beings for a relatively long time, and this is what I am proposing that we are now about to face."
The duality of the Sixth Wave mind in its days (Fig 3a) is not only what led to the Western dominance of the world (Fig 1). This duality also led to all forms of dominance that some individuals have had over others based on race, bloodline, gender or religion, a dominance that has manifested itself in the economic and political arenas. If it for instance seems that at the current time there is a revival of racism in the United States, this is because the new mind (3b) clashes with the racism that was inherited from the duality of the previous day (3a). Hence, if racial or gender inequality, and especially economic inequality, has been caused by the duality of the Sixth Wave mind, then the latter will be a constant impediment to the path towards a golden age of non-duality. Nationalism, too, has come out of a mindset of dominance fostered by the duality of the Sixth Wave and is by itself a great impediment to peace. Hence, the manifestation of a golden age will require that the majority of people transcend not only racism and sexism, but also the subordination to a national perspective, and become able to embrace one that is global – humanity first!"
"Many of the effects of the creation waves and their interference pattern were discussed in my book The Nine Waves of Creation. The shift on September 27, 2017 means that the shift into a night of the Eighth Wave, (which albeit it has a feminine streak is also a dualist Wave), will add to the non-duality of the global mind. This is why, paradoxically maybe, darkness and non-duality will go hand in hand on many very different levels. The conclusion is that while I expect the following thirteen years to become very destructive (especially to the Western Hemisphere and institutions based on the dominance of the left brain half) this era will at the same time prepare for a world of global solidarity based on equality that after 2031 truly will become a possibility. For those that want to be part of such a world, awareness of and development of resonance with the Ninth Wave and uncompromisingly following the guidance and visions provided by this are pre-requisites. From the perspective of the unity consciousness that will be developed in the meantime many problems that seem intractable from a dualist mind-frame will naturally come to find new and enlightened solutions."
The Mayan Calendar did not End in 2012
August 14, 2017 | By: Calleman | Article by Carl Johan Calleman, Ph.D.
I have found that a large number of people still believe that the Mayan calendar ended in 2012 – not only those that took a casual interest in this calendar, but also some of those that were quite engaged. Possibly partly because of disappointment, not everyone in the latter group fully assimilated the discussions that followed upon 2012 in order to understand what had happened. I should recognize here, as I did in an article written on my blog on December 31, 2012 and later in my book The Nine Waves of Creation, that I have myself contributed to the confusion regarding the so-called "ending" of the Mayan calendar. At least I have corrected myself and taken responsibility for my part in it. Yet, that a whole field of researchers gets something wrong is nothing unusual in scholarly endeavors. Moving through such errors is what the scientific process is all about. What is important is that the errors are corrected so as to create a new opening for understanding, and this is what I am going to discuss in this article. For some, this may seem like a somewhat technical article, but on the other hand the topic remains of critical importance for our understanding not only of the past but also of the future of humanity. Thus, I think it is worth diving into the matter. The Mayan calendar itself was not wrong. A large number of researchers and students of it were however wrong on a critical point – and I am not talking about silly Hollywood disaster moviemakers, but about the serious people. Because of this error in our understanding of the Mayan calendar, we have to make the appropriate corrections if we are to arrive at a new and expanded understanding of it.
To begin with, we may wonder where the idea came from that the so-called Long Count would come to an end. When people used to talk about the "end" of the Mayan calendar, they were actually referring to this particular calendar, the Long Count, which is a wave form that has been developing in phases, called baktuns, of 144,000 days each, starting in 3115 BCE. (We will here leave aside the question of when exactly this began and in what context the Long Count stood to the other waves.) This was the highest-frequency creation wave followed by the ancient Maya, which is not surprising, since at the time no wave with a higher frequency had been activated. Hence, to them the Long Count was their main chronology, to which they related especially their own dynastic histories. A critical question regarding the purported end of the Long Count is then how many baktuns it was supposed to consist of. Is the Long Count constituted by thirteen baktuns, or by twenty baktuns, or is it in fact endless, going into the future never to end? If it were thirteen baktuns long, then it would have ended in 2011 (or 2012), and the world would have experienced a profound discontinuity, as the Long Count would conceivably have been followed by a new cycle. If it were twenty baktuns long, then its end would generate a discontinuity in 4772 CE; but if it were in fact endless, there would never be any discontinuity or end of time generated by the Long Count. There would only be an ongoing wave movement going up and down with its shifting phases (baktuns). Hence, there are three different possible solutions to the problem.
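To make the three alternatives concrete, the sketch below (my own, in Python; the 365.2425-day year, the naive year arithmetic and the adjustment for the missing year 0 are simplifying assumptions) converts baktun counts into approximate Gregorian end dates:

```python
# Rough check of the candidate Long Count durations discussed above.
DAYS_PER_BAKTUN = 144_000
DAYS_PER_YEAR = 365.2425  # mean Gregorian year; an approximation

def baktuns_to_years(n):
    """Length of n baktuns in approximate Gregorian years."""
    return n * DAYS_PER_BAKTUN / DAYS_PER_YEAR

START_YEAR = -3115  # the article's start of the Long Count (3115 BCE)

for n in (13, 20):
    years = baktuns_to_years(n)
    end = START_YEAR + years + 1  # +1 because there is no year 0
    print(f"{n} baktuns ≈ {years:.0f} years, ending around {end:.0f} CE")

# Prints roughly:
#   13 baktuns ≈ 5125 years, ending around 2011 CE  (2012 with other correlations)
#   20 baktuns ≈ 7885 years, ending around 4771 CE  (the article's 4772 CE)
```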
Although the contemporary Maya hardly argued that their calendar would come to an end, they had long since (about 1000 years ago) ceased to follow the Long Count or any of the other higher waves of the calendar system. Hence, modern scholars had to rely on what they could gather from the archeological findings about the calendar system of the ancient Maya. Quite a few archeologists argued that the Long Count was limited to thirteen baktuns, including Michael Coe in his very influential book The Maya. He originally placed the end of the Long Count, after thirteen baktuns, in 2011, but later changed it to 2012. This shift date was then picked up as being of significance by some early pioneers giving the Mayan calendar meaning, such as Peter Balin and Frank Waters. What had a much greater impact, however, was when Jose Argüelles wrote The Mayan Factor and said about December 21, 2012: "Then it shall be ready. The unique moment. The moment of total planetary synchronization, on the beam, will arrive – the closing out not only of the Great Cycle, but of the evolutionary interim called Homo sapiens. Amidst festive preparations and awesome galactic-solar signs psychically received, the human race, in harmony with the animal and other kingdoms and taking its rightful place in the electromagnetic sea, will unify as a single circuit. Solar and galactic sound transmissions will inundate the planetary field. At last, the Earth will be ready for the emergence into the inter-planetary civilization."
To Argüelles there was obviously no doubt that the Long Count was limited to thirteen baktuns and that the date on which it would come to a close would generate a profound discontinuity, as described above. Yet, few people would probably agree today that the above was a good description of what happened on December 21, 2012, which by all standards was a very uneventful day. Something must have been wrong here. Argüelles himself had however passed away about a year earlier, and we do not know what he would now have said about it. As far as I know, none of his followers has taken up the mantle and addressed the issue. This again raises the question of whether the Long Count was truly thirteen baktuns and, if it was not, whether this could explain why there was no such discontinuity on December 21, 2012 as Argüelles had envisioned. Could it be that the idea most archeologists had been promoting, namely that the Long Count was limited to thirteen baktuns, was in error? Certainly, it was on their information that Argüelles had based himself.
I should say here that Argüelles was not alone among the more visionary researchers in holding the idea that the Long Count was limited to thirteen baktuns. He just came out first and was the most influential proponent of it. I myself also essentially believed the archeologists to have been right, and so did the late John Major Jenkins, who gave one of his books the subtitle The true meaning of the Maya calendar end date. In our defense, and in the defense of the archeologists, there were in fact some very good reasons to believe that the Long Count should be limited to thirteen baktuns. One is the prominence of the number 13 in the sacred tzolkin calendar of the Maya, which is a microcosmic matrix for all the waves, including the Long Count. Another is the fact that there are many inscriptions from the ancient Maya describing a pre-Long Count lasting from 8240 BCE to 3115 BCE, which was in fact thirteen baktuns. If so, why would the Long Count not similarly be limited to thirteen baktuns? Indeed, if the Long Count were not thirteen baktuns like its preceding creation wave, this would be truly mysterious. It would imply that the Long Count is not a cyclically repeated phenomenon of identical duration, as astronomical phenomena are. If the pre-Long Count and the Long Count did not have the same durations, then they would have to emanate from the activation of waves in the quantum field, and at the time, for many, this seemed unthinkable.
From my own perspective, I had always known that there were ancient Mayan inscriptions with dates deep into the future, but I had chosen to ignore them (I take it both Argüelles and Jenkins did the same). After all, most disciplines of study display some anomalies, outliers that cannot be understood from the established context, and I thought those dates were examples of such. I thus only started to realize that there must have been something seriously wrong with the established idea about the Long Count when I heard from Dr Mark van Stone that there was in fact no evidence from the ancient Maya saying that it would be limited to thirteen baktuns. On the contrary, there was evidence from ancient times that they believed it to go beyond thirteen baktuns. I quote from van Stone's book 2012 – Science and Prophecy of the Ancient Maya: "There are bits of evidence suggesting that although the Maya reset the Long Count at the last creation, they don't plan to do it in 2012. At least some of them expected no more re-creations after this one. After would come, and, and on up." (We are in other words now in the fourteenth baktun of the Long Count and not in the first baktun of a new cycle.) The evidence for this comes from Yaxchilan, Palenque and Tikal, places that were at the leading edge regarding calendars in the ancient Mayan world.
For example, at the Temple of the Inscriptions in Palenque there is a relief describing different (presumptive) celebrations of the anniversaries of the coronation of Pacal. One of them points to a date in the future (1 pictun, 0 baktuns, 0 katuns, 0 tuns, 0 winals and 8 kin, i.e., which corresponds to the year 4772 CE. The point here is not only that this date shows that the Mayan calendar, in the view of the Palenqueans, continued well beyond our own time. This notation also shows that the Long Count was not expected to reset in 2012, in which case the same day would instead have been written with seven baktuns, as (deducting 13 baktuns from 1 pictun, which corresponds to twenty baktuns). In the ancient view the Long Count would thus not reset with a discontinuity on December 21, 2012, but would go on to at least twenty baktuns, and maybe the added 8 days were a message that it would go on endlessly, which is what I believe.
This was profound knowledge and meant that the whole idea of a discontinuity at the end of the Long Count lacked a foundation in ancient sources. The Mayan calendar did not end, and there was no indication that a new cycle would begin in 2012. Starting with my article Some New Reflections on the Mayan Calendar "end" date from December 31, 2012, I began to correct my earlier view and develop one in which there is no end to the wave movement of the Long Count (or of any of the other eight waves of creation). The consequences of these discoveries by van Stone were profound, as they meant that, logically speaking, all the waves of the Mayan calendar system, not only the Long Count, should continue into the future. In principle we can then still understand what is happening in the world from the interference patterns of these different waves of the Mayan calendar, although they may not always be easy to interpret.
While John Jenkins must obviously also have been surprised at the uneventful nature of December 21, 2012, it was his long-time supporter Geoff Stray who addressed the issue of the duration of the Long Count, in an article from July 2013 called Mysteries of the Long Count, which he sells on his web site. This is a good article and describes in much detail what views different Mayanist researchers have held in the past regarding the duration of the Long Count. His conclusion is essentially the same as mine, except that while I clearly say that the idea of a thirteen-baktun Long Count was wrong, he merely entertains this as a possibility. While he criticizes the archeologists for not having gotten it right, he does not take a clear stand for twenty baktuns or more himself. Since John Jenkins endorsed Stray's article, we have reason to assume that it also reflected John's view, and it may in fact be what ultimately defined his legacy regarding the Long Count.
But why, we may ask, did Stray not take a clear stand regarding the duration of the Long Count and say that it was actually twenty baktuns (or more)? I think he had strong reasons to avoid this. We should remember that before the shift, Stray would say that on the very day of December 21, 2012 there would, because of the "galactic alignment" of the midwinter sun, be a burst of DMT (dimethyltryptamine) affecting many people, which would generate a widespread transformation. Again, this did not happen on that date, and so for him there was little reason to claim that the Long Count was limited to thirteen baktuns. Why then did he sit on the fence and not clearly recognize that the Long Count is at least twenty baktuns? This is explained by the fact that doing so would make irrelevant everything he had been saying for so many years about a galactic alignment and the role of the precessional cycle in it. All of this would go out the window, because if the Long Count was indeed not limited to thirteen baktuns, the shift on December 21, 2012 would not have had the meaning of an end date that Jenkins had proposed, and so the whole idea of a galactic alignment would be exposed. There was in fact no ancient Mayan text mentioning anything like a galactic alignment or a 26,000-year precessional cycle that it was linked to. These were ideas made up and believed by modern people.
It should be mentioned here that, based on the idea of a thirteen-baktun duration of the Long Count, many of Jenkins' followers had come to believe in a precessional cycle of 26,000 years. They thought this was the basis of the Mayan calendar because the duration of five Long Counts, 5 x 5,125 = 25,625 years, seemed so close to the duration of this astrological cycle. Consider then what happens to this correspondence if the Long Count (as Stray after all implies) is twenty baktuns. Then five presumed Long Count cycles would have a total duration of 4 x 5,125 + 7,900 = 28,400 years, which would not jibe very well with the precessional cycle. The whole theory that the Mayan Long Count was based on the precessional cycle would then also be exposed, at least among critically thinking people. This is presumably the reason that Stray does not take a stand regarding the duration of the Long Count: either way – whether the Long Count is thirteen or twenty baktuns – the idea of a precession-based galactic alignment in 2012 would be seen to lack any foundation. Despite Stray having written this article and Jenkins having endorsed it, you still find people who believe that the Mayan calendar was based on the 26,000-year precessional cycle and that this ended on December 21, 2012. By Stray's own arguments, this cannot possibly have been the case. Even though Stray does not draw the full consequences of his article, it is well worth reading.
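To spell out these numbers (my own arithmetic, using the 144,000-day baktun and a 365.2425-day year):

$$ 13 \times 144{,}000 \text{ days} \approx 5{,}125 \text{ yr}, \qquad 5 \times 5{,}125 = 25{,}625 \text{ yr} \approx 26{,}000 \text{ yr}, $$

$$ 4 \times 5{,}125 + 20 \times \frac{144{,}000}{365.2425} \approx 20{,}500 + 7{,}885 \approx 28{,}400 \text{ yr}. $$

The first line is what made the precession identification look plausible; the second is the mismatch that a twenty-baktun Long Count produces.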
Someone may then wonder why the ancient Maya wrote the Tortuguero Monument No 6 inscription about the recent shift, if this was not an end to the Long Count. And yes, it is actually quite astonishing that the Maya some 1500 years ago wrote an inscription pointing to a very profound shift in our own time. They must have seen that this was indeed a very significant shift. And here comes what may be the most difficult thing for many to understand: the October 28, 2011 date (or, as some believe, December 21, 2012) was not an important date because it was an end date, but because it was a synchronization date. The meaning of Bolon Yokte Kuh (the nine-step divinity; Bolon means nine in the Mayan language) appearing in his full regalia (as the inscription reads) is important because, for the first time in the history of the universe, all nine waves (the full regalia of the nine-level god) had been activated by March 9, 2011. For a moment in time in October of 2011 they were actually also synchronized, all in the same thirteenth phase, at this particular shift date. This was indeed a highly significant shift, opening up for a new era, determined by a new interference pattern between the different waves, that we are living in now. Yet, and this may be the main point of this article: this had nothing to do with the Long Count coming to an end.
Fig 1. The Long Count did not come to an end on October 28, 2011 (or December 21, 2012). What happened on this date was that, for the first time in the history of the universe, all nine waves were synchronized and went into a phase shift together. Bolon Yokte Kuh, the nine-level "god", now appeared in his full regalia.
So, overall, all of the major players (including myself) who were involved in the Mayan calendar before 2012 were in error when it came to the idea that the Long Count would then come to an end. We were right, however, in that there was a very significant shift, as heralded by the Tortuguero monument. This shift was however not based on the ending of any wave, but on the activation of all the nine waves of creation and their synchronization; these waves (including the Long Count) are now continuing to run in parallel, shaping the destiny of humanity. The new interference pattern between these waves creates both challenges and opportunities. The opportunity essentially lies in creating resonance with, and following the guidance of, the Ninth Wave, which is the one manifesting the unity consciousness that will shape the future of humanity. Despite what many seem to believe, the Mayan calendar did not end in 2012 (or 2011).
The Ninth Wave of Creation and the Solar Eclipse
April 17, 2017 | Archives for Calleman | Article by Carl Johan Calleman, Ph.D.
Today, August 4, begins another wave period of the Ninth Wave, giving guidance and direction to all those who are committed to the destiny of humanity as one of unity consciousness. Each such wave period is a total of 36 days long and is divided into a DAY of 18 common days followed by a NIGHT of 18 common days. The beginning of such a wave period is an important time for everyone to light a candle, as it marks a shift in human creativity. This is just as true for August 4 as it was for May 24 and June 30, or as it will be for the upcoming September 9, 2017. The peaks (also called DAYS) of the Ninth Wave are times when the light in the movement towards unity consciousness shines through. This is true especially for those individuals that are committed to bringing humanity in this direction, but on a collective scale it will also over time become increasingly visible, with consequences on a national or global scale. Yet, we are early in the development of this wave, which was activated in 2011, and it is not visible to everyone. Since August 4 is the beginning of a DAY of 18 days, and this wave is supportive of a movement towards peace and unity, it is an important opportunity for those seeking such a direction to listen attentively to any guidance that may now come forth regarding our individual courses in life.
The shift to a NIGHT in this wave period may however be what is most interesting, as it happens on August 22 and hence follows directly upon the full solar eclipse that is visible in parts of the US on August 21. The solar eclipse in other words initiates a NIGHT of the Ninth Wave. This full solar eclipse is, according to most sources, the first in 99 years to happen in the United States, and since it is exclusively visible in this country, it has mostly been interpreted as linked to the fate of this nation. To indigenous peoples, solar eclipses have overall been considered a negative omen, and by medieval astrologers they were typically believed to herald the downfall of a king or dynasty. It is then natural that many current-day astrologers have suggested that the event somehow pertains to Donald Trump. After all, this solar eclipse is limited to the United States, and the political crisis in the Trump administration has become a constant. By the standards of political normality, any period so far this year could be qualified as a downturn, but at the current time the political chaos is also intensifying. To predict a further downturn for the Trump administration can hardly be said to be a very original idea. Whether there is a solar eclipse or not, this is still happening.
I have however argued that Donald Trump does not ultimately cause this erosion of the political system, but rather that he may be the person best suited to manifest the downturn that is inherent in the shift into a NIGHT of the Sixth Wave. This energetic downturn primarily hits the Western powers, in particular the US and the UK, and for this reason an erosion of the powers of these two nations is running in parallel. The way it manifests in the UK is that now, after the activation of Article 50 (calling for the withdrawal of the UK from the European Union), it has become quite clear that most foreign banks will move their European headquarters away from the City of London. Given that its financial centre has been what remained of the British Empire, this is not a small thing, and it is a marked part of the downturn of the West. Similarly, in many regards, such as climate politics, many other nations are starting to count the US out, leaving the US increasingly isolated, at least from its traditional allies in Europe. In this perspective, the impaired standing of the US and UK in the eyes of the rest of the world is not really caused by Trump and Brexit. Rather, Trump and Brexit are serving to manifest what was in one way or another going to happen anyway because the Sixth Wave has turned into a NIGHT. Fig 2 shows how, in the ancient view of the Mesoamerican peoples, the Plumed Serpent (the sine wave) would bring civilizations as peaks began and abandon them as valleys began.
Fig 2. The rise and fall of significant civilizations as a result of the movement of the Sixth Wave. Note that the seventh of its peaks for the first time in human history generated a global civilization.
It should be pointed out here that overall the Sixth Wave (with a wave period of about 800 years) provides a much larger perspective than the Ninth Wave (wave period of 36 days), and even more so compared to a solar eclipse on a single day. The Sixth Wave, among many other things, describes the chief development of civilization, going back to its early beginnings in Sumer and Egypt about five thousand years ago (see Fig 2). So while the downturn of Western dominance is a phenomenon that will manifest over the next four hundred years, the higher waves, such as the Ninth Wave, will still play an important role in what happens in our most immediate future. The high frequency of the Ninth Wave gives it great power even if it has not yet been active for a time comparable to that of the Sixth Wave. Hence, while it can be considered a certainty that the Western powers will not continue to dominate the rest of the world for the coming 400 years, it is not certain how and exactly when the global balancing process setting an end to this will occur. The solar eclipse may herald an event along such lines, but my own guess is rather that it will temporarily strengthen the old order of dominance of corporations and billionaires symbolized by Trump. After all, the solar eclipse marks the beginning of a NIGHT in the Ninth Wave, and those are not the periods that are most conducive to unity consciousness.
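As a consistency check on the stated periods (my own arithmetic, not a claim made in the article): a baktun of 144,000 days is about 394.26 years, so the Sixth Wave period of 788 years corresponds to two baktuns, or 288,000 days, and the ratio to the 36-day Ninth Wave period is

$$ \frac{288{,}000}{36} = 8{,}000 = 20^3, $$

i.e., a factor of twenty in frequency per intervening wave level, which is consistent with the baktun-based structure of the numbers quoted here.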
Fig 3. The shift in the global and individual minds as a result of the shift from a DAY to a NIGHT in the Sixth Wave.
Regardless of what scenario plays out in the United States and elsewhere in the upcoming couple of wave periods of the Ninth Wave, its fate is primarily determined by the Sixth Wave (Fig 2), and a balancing of power between the Eastern and Western Hemispheres is now taking place (paralleled on the individual level by a balancing of the left and right brain halves; see Fig 3) that will tend to weaken it. What is not so clear is how disruptive the transition to balance on a global scale will be. This depends largely on the extent to which people develop resonance especially with the Eighth and Ninth Waves of creation, which are the waves now leading humanity to manifest its destiny on a global scale. Naturally, this in turn depends on how many people actually intend to establish a balance that can ultimately lead to peace.
So even if the upcoming 66th period of the Ninth Wave (beginning August 4 and including the solar eclipse) is likely to play a crucial role in defining the future fate of the United States, I believe the 67th is the one that will be decisive. The latter Ninth Wave period begins with a DAY on September 9 and turns into a NIGHT on September 27, 2017. The reason the latter is likely to be an ominous day is that the Eighth Wave then simultaneously turns into a NIGHT and will remain in this state for the following 360 days. For this whole time period, until September 22, 2018, only the Ninth Wave will provide any spiritual LIGHT to humanity. And so, even if this is an energetic scenario that would be ideal for creating balance and equality on a global scale (and hence prepare for a more balanced and peaceful world), it is likely to be a very difficult process to accomplish this. While the light of the Eighth Wave will come back on September 22, 2018, it will go out again another 360 days later, and the stabilization of a world in balance may not be apparent until 2031 (when the Seventh Wave turns into a DAY). Reasonably, then, the upcoming thirteen years will be a challenge to us all, since the waves influence the minds at all levels of the universe – galactic, heliospheric and planetary, as well as our own human level. The foundation for lasting peace after 2031 will be built during this time ahead, but this balancing will largely be effected by turning out the light from the dualist waves (see Fig 2), so that the transformation may come to take place in a potentially destructive darkness. Following September 27, I believe we will have to adapt to an interference pattern of the creation waves that shakes up our ingrained beliefs as to what we are here for more than ever before. We will have to learn how to follow the only wave that will provide light, which is the Ninth Wave.
Carl Johan Calleman, Ph.D is the author of several books based on the ancient Mayan calendar system including the most recent The Nine Waves of Creation (Inner Traditions, 2016).
The 36 Day Period of the Ninth Wave –
The Purpose of Celebrating April 18, May 24, etc
April 17, 2017 | Archives for Calleman | Article by Dr. Carl Calleman
The Ninth Wave is the creation wave that was activated on March 9, 2011 and has continued to run ever since. By all accounts it will continue to oscillate as a sine wave without end. It is the highest-frequency wave of the Mayan calendar system and has a complete wave period of 36 regular days. A new such wave period, the 63rd since its inception, began on April 18.
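For readers who want to follow the count themselves, here is a minimal sketch (my own, in Python; numbering the periods from 1 on the activation date is an assumption, chosen to match the 63rd period beginning April 18, 2017):

```python
from datetime import date, timedelta

ACTIVATION = date(2011, 3, 9)  # activation of the Ninth Wave
PERIOD = 36                    # days per full wave period
HALF = 18                      # days per DAY (and per NIGHT)

def ninth_wave_state(d):
    """Return (period number, 'DAY' or 'NIGHT') for a given date."""
    elapsed = (d - ACTIVATION).days
    period = elapsed // PERIOD + 1           # period 1 starts at activation
    phase = "DAY" if elapsed % PERIOD < HALF else "NIGHT"
    return period, phase

# The 63rd period should begin 62 full periods after activation:
print(ACTIVATION + timedelta(days=62 * PERIOD))  # 2017-04-18
print(ninth_wave_state(date(2017, 4, 18)))       # (63, 'DAY')
```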
Next, the 64th period will start on May 24, a day that will for instance be marked by San Bushmen in Cape Town. Then, on June 29, the 65th wave period begins, which will be marked by international ceremonies. In this article I will attempt to explain and describe why such global events are being created and why it is part of our evolutionary purpose to follow the Ninth Wave. Internet resources describing this may be found online.
I may begin by pointing out that there are different kinds of shift points in the Mayan calendar system. One type is when the mentality of a segment of humanity (usually not all people, but a small, energy-sensitive minority) decisively changes once and for all. Examples may be the Harmonic Convergence in 1987, which marked the beginning of the pre-wave to the 8th Wave, or the Conscious Convergence in 2010, which marked the beginning of the pre-wave to the 9th Wave. The most important of such shifts may be the concurrent shift in several different waves on October 28, 2011, which has provided the background to the world we are living in today. Such events may be said to have the purpose of celebrating a shift created by the new energies of the waves and are truly to be regarded as one-time events, as the world afterwards will be forever changed.
In contrast, the marking and empowering of the beginnings of new days (or peaks) of the Ninth Wave, for instance on April 18, May 24, June 29 and August 4, 2017, are not one-time events. They, as well as any other starting point of such a wave period, are about creating awareness in yourself and others of the existence of a wave generating unity consciousness, and about helping you tune in to its frequency. This wave is the positive, forward-moving wave created in the shift of 2011 that leads to the manifestation of the destiny of humanity as a unified species, and no one talking about being part of the shift should ignore it.
What this means is that it is especially in the peak periods of the Ninth Wave that projects towards unity consciousness will have wind on their backs, and this is true also if your project is simply yourself and your way of being in the world. This also means that you will have to gain and spread awareness of this wave so that you as an individual can gain the necessary guidance for how to be and act in this world, even under very chaotic conditions created by lower-frequency waves. This guidance is given to you especially during the peak periods of the wave, whereas the valley periods are essentially periods of reaction, when you have to deal with whatever in yourself is inconsistent with the manifestation of unity consciousness. This insight can only be gained from your own experience, and the fact that I or someone else may be advocating creating awareness of the Ninth Wave has little import if this is not consistent with how you experience the ups and downs of the 36-day wave form. It is only to the extent that you are able to experience in your own awareness that this wave form fundamentally conditions your process towards unity consciousness (or ascension, as some would prefer to say) that you will successfully be able to tune in to the frequency defining the destiny of humanity and be able to take part in manifesting it.
What this means is that the crucial thing is to be able to observe yourself in relation to the wave form, and this will come out of learning how to tune in to it. This does not by itself require that you participate in large-scale ceremonies or meditations, although such events will help alert you to what is happening in the energy field that underlies the material aspects of our existence. It is perfectly sufficient that you light a candle or a sacred fire, or face the sun, on the first day of a DAY to mark that for the next 18 days you – as a being committed to the creation of a Golden Age – will have wind on your back. You may also pray for guidance for this time period regarding what steps you should take in your individual projects (and these may be very diverse) to have this come true.
The development of resonance with the Ninth Wave is thus not a one-time event. It is something that will occur over time as you fine-tune your experience of it. This requires marking several of its beginning points and this is not really a process that has an end. Over time you will increasingly become an expression of the Ninth Wave, and if you believe in the return of Quetzalcoatl, or the Plumed Serpent, you will be it. Naturally, it may be argued that there are people that are manifesting unity consciousness without consciously having developed resonance with the Ninth Wave. Nonetheless, such resonance can still be a remarkable aid also to such individuals as it makes them fully aware of the existence of such a creation wave, which is directed towards the future of humanity in unity. To also actually follow this wave with its ups and downs may in addition provide a self-correcting instrument of great importance for those who try to navigate the many large waves that exist in the ocean of creation.
To make an analogy, we may look at tuning in to the Ninth Wave somewhat like tuning in to a newly established radio station. First of all, you and other people need to be informed that this station exists, and for this purpose it is useful to stage public events informing about it. Secondly, you have to know at what frequency the station is broadcasting, since otherwise you will not be able to hear it. Third, the listener may need to work to tune in optimally to the frequency so that he or she gets a clear signal. The latter is all the more important if there are several other stations powerfully broadcasting at competing frequencies that may make it difficult to hear the desired station. This is very much the case regarding the Ninth Wave, since there are lower waves that prevent the listener from getting a clear signal. This is the real crux of the matter. Hence, even though there are many groups and movements talking about Oneness, Ascension and fourth-dimensionality, unless their work is connected to the Ninth Wave they will not be able to gain the guidance to get there.
So will the world change on days such as April 18, May 24 or June 29? What absolutely will be enhanced during the 18 days following such dates is the potential for manifesting unity consciousness. Already now we may see some steps in the direction of unity, also on a global scale, occurring on such dates. You may for instance take note of whether certain ideas or persons in power are supported by the days or by the nights. This will tell you much more about the future of a phenomenon than political opinions. Yet, nothing changes in the time periods that are days unless people tune in to the energy shifts and change who they are in the world as a result of their resonance with the Ninth Wave. Over time we will then see a strengthening of the process toward unity consciousness, in such a way that actions and projects expressing unity are increasingly synchronized also on a visible global scale. Resonance with the Ninth Wave will in other words increasingly become a synchronizing factor for positive projects for the world. Personally, I believe that unless a large part of humanity develops such a resonance, we have very little reason to expect that our species will survive. To counteract a potentially disastrous future, we must now urgently develop a collective calling to manifest our Golden Age destiny of unity through resonance with the Ninth Wave. And it can only happen day by day. |
53172290f7af9150 |
Demystifying the QCD Vacuum: Part 3 – The Untold Story
Although the subtle things that are often glossed over in the standard treatment of the QCD vacuum can be explained as discussed in part 2, there is another, more intuitive way to understand them.
Most importantly, this different perspective on the QCD vacuum shines a completely new light on the mysterious $\theta$ parameter.
To the expert, this perspective will be familiar. It sometimes appears in the literature and therefore the “untold” part in the headline is, of course, a bit exaggerated. However, it took me, as a student, a long, long time to find it at all, and even longer to find a proper explanation that made sense to me.
The standard story of the QCD vacuum uses the temporal gauge. This is not a completely fixed gauge. Time-independent gauge transformations are still allowed. Only this residual gauge freedom makes the whole discussion in terms of large and small gauge transformations, etc. possible.
One may wonder what happens when we analyze the vacuum in a different gauge, where there is no residual gauge freedom. In other words: in a gauge that fixes the gauge freedom completely. Possible choices are, for example, the axial and the Coulomb gauge.
The interpretation of the QCD vacuum is completely different in these gauges. Most importantly: there is no vacuum periodicity.
In the axial gauge, there is only one non-degenerate ground state. Then, of course, it is natural to wonder what we can learn about the $\theta$ parameter here. At first glance, the result that there is a unique ground state seems to imply that we have $\theta=0$. However, this is not the case, and we will discuss this in a moment.
In the Coulomb gauge, there is also only one non-degenerate ground state. However, the interpretation of the vacuum structure in this gauge is especially tricky. Most notably, one encounters the famous Gribov ambiguities. These appear because the condition that fixes the Coulomb gauge does not single out a unique gauge potential everywhere in spacetime. Instead, there are regions where multiple gauge potential configurations satisfy the condition. These configurations are called Gribov copies, and the fact that we do not get a unique gauge potential configuration everywhere in spacetime is called the Gribov ambiguity.
Now, how is this not a contradiction to the standard picture of the QCD vacuum? When there is only a unique non-degenerate ground state, there is no tunneling between degenerate vacua and therefore no $\theta$ parameter, right?
No! There is still tunneling and also a $\theta$ parameter. In the axial gauge, the tunneling starts from the unique ground state and ends at the same unique ground state. (In the Coulomb gauge the tunneling happens between the Gribov copies?!)
To understand this, we need an analogy.
A nice analogy to the QCD vacuum is given by the following Hamiltonian:
$$ H = -\frac{d^2}{dx^2} + q(1-\cos x), $$
where $-\infty < x < \infty$, and which describes a particle in a periodic potential $V(x) = q(1-\cos x)$. Therefore, this situation is quite close to the standard picture of the QCD vacuum, with a periodic structure and infinitely many degenerate ground states.
For this Hamiltonian, we have the Schrödinger equation
$$ -\frac{d^2 \psi}{dx^2} + q(1-\cos x)\,\psi = E \psi. $$
(Among mathematicians this equation is known as the “Mathieu equation”. Sometimes it’s useful to know the name of an equation, if you want to dig deeper.)
However, exactly the same Hamiltonian describes a “quantum pendulum”. This interpretation only requires that we treat our variable as an angular variable: $x \to \phi$, with $0 \leq \phi < 2\pi$, and thus
$$ -\frac{d^2 \psi}{d\phi^2} + q(1-\cos \phi)\,\psi = E \psi. $$
Now, we identify the point $2\pi$ with $0$, and all values of $\phi$ larger than $2\pi$ with the corresponding points in the interval $0 \leq \phi < 2\pi$. This immediately implies $\psi(\phi + 2\pi) = \psi(\phi)$. As a consequence, we no longer have infinitely many degenerate ground states, but only a unique ground state! Therefore, the situation here is exactly the same as for the QCD vacuum in a physical gauge.
Now, what about tunneling?
For a long pendulum, i.e. for large $q$, the ground state at $\phi = 0$ and the excited states are approximately the same as for a harmonic oscillator. For large $q$, we can do a perturbative analysis in $q^{-1/2}$ and take the “anharmonicity” into account this way. However, famously, this perturbation series does not converge, because we miss something important in our analysis. Even a pendulum with small energy, i.e. one with only small oscillations around the ground state, can “tunnel”. In this context, this means that the pendulum performs a motion that is classically forbidden, such as rotating once completely around its suspension and ending up in the ground state again. This is exactly what the instantons describe in a physical gauge like the axial gauge. There is no tunneling between degenerate ground states, because there are no degenerate ground states. Instead, we have tunneling that starts at the unique ground state and ends again at the unique ground state. This is still tunneling, because there is a potential barrier that classically prevents the pendulum from rotating once completely around its suspension. The usual quantum mechanical perturbation analysis yields harmonic oscillator states plus small corrections from the anharmonicity, but on top of that we must take into account the genuinely non-perturbative quantum processes, like the tunneling once around the suspension of the pendulum.
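To make the non-perturbative nature of this effect explicit, here is a rough WKB estimate (a back-of-envelope addition of mine, with all prefactors dropped and the small ground-state energy neglected). In the units of the Hamiltonian above, the tunneling amplitude is controlled by the barrier integral
$$ \Delta \sim e^{-S_0}, \qquad S_0 = \int_0^{2\pi} \sqrt{V(\phi)}\, d\phi = \sqrt{2q} \int_0^{2\pi} \sin\frac{\phi}{2}\, d\phi = 4\sqrt{2q}. $$
Since $e^{-4\sqrt{2q}}$ is non-analytic in $q^{-1/2}$, no term of the perturbation series can ever reproduce it, which is exactly why the tunneling processes have to be added to the perturbative analysis by hand.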
Okay, fine. But what about $\theta$?
Well, now that we have understood that there can also be tunneling in the physical gauge picture of the QCD vacuum, which corresponds to the pendulum interpretation of the Hamiltonian in the example above, we can argue that there can again be a $\theta$ parameter. This is the phase that the pendulum picks up when it tunnels around its suspension. In a quantum theory, we can have $\psi(\phi + 2\pi) = e^{-i \theta} \psi(\phi)$ instead of $\psi(\phi + 2\pi) = \psi(\phi)$.
When we interpret the Hamiltonian in the example above as describing the movement of a particle in a periodic potential, the parameter $\theta$ describes different states of the same system, completely analogous to the Bloch momenta in solid-state physics.
However, in the pendulum interpretation different $\theta$ describe different systems, i.e. different pendulums! Thus, in this second interpretation, it is much clearer why $\theta$ is a fixed parameter and not allowed to change.
To bring this point home, let’s consider an explicit example of how a $\theta$ parameter can arise for the quantum pendulum.
The pendulum only picks up a phase $\theta$ when it moves in an Aharonov-Bohm potential. To make this explicit, let’s assume the pendulum carries electric charge $e$ and rotates around a solenoid with magnetic flux $\theta$. This magnetic flux is the source of a potential $A$ in the plane of the rotating pendulum.
We get the Hamiltonian that describes this system by replacing the derivative with the covariant derivative:
$$ H = -\left(\frac{d}{d\phi} - ieA\right)^2 + q(1-\cos \phi), $$
and thus we have the Schrödinger equation
$$ -\left(\frac{d}{d\phi} - ieA\right)^2 \psi + q(1-\cos \phi)\,\psi = E \psi. $$
As before, we impose the condition $\psi(\phi + 2\pi) = \psi(\phi)$. However, we can also introduce a new wave function $\varphi(\phi)$ that obeys the standard Schrödinger equation without the additional vector potential
$$ -\frac{d^2 \varphi}{d\phi^2} + q(1-\cos \phi)\,\varphi = E \varphi, $$
where the two wave functions are related by
$$ \psi(\phi) = \exp\left[ ie \int_0^\phi A\, d\phi' \right] \varphi(\phi). $$
(Take note that the relation between the magnetic flux $\theta$ and the potential $A$ is $ \int_0^{2\pi} A d\phi = \theta $).
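It is a one-line check (spelled out here for completeness; this step is usually left to the reader) that the substitution removes $A$ from the equation:
$$ \left(\frac{d}{d\phi} - ieA\right) e^{ie\int_0^\phi A\, d\phi'}\, \varphi = e^{ie\int_0^\phi A\, d\phi'} \left( ieA\,\varphi + \frac{d\varphi}{d\phi} - ieA\,\varphi \right) = e^{ie\int_0^\phi A\, d\phi'}\, \frac{d\varphi}{d\phi}. $$
Applying the covariant derivative a second time then yields the $A$-free Schrödinger equation for $\varphi$ given above.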
The information about the presence of the magnetic flux and hence of the vector potential $A$ is now, when we use $\varphi(\phi)$ instead of $\psi(\phi)$, encoded in the boundary condition:
$$ \varphi(\phi + 2\pi) = e^{-ie\theta} \varphi(\phi). $$
The energy of the ground state of the pendulum then depends directly on the magnetic flux:
$$ E(\theta) \propto (1 - \cos\theta). $$
This shows that in this model, the parameter $\theta$ defines different systems, namely quantum pendulums in the presence of different Aharonov-Bohm potentials.
In contrast, in the periodic potential picture, where $\theta$ is interpreted as the analogon of the Bloch momentum, the parameter $\theta$ describes different states of the same system.
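The statement $E(\theta) \propto (1 - \cos\theta)$ is easy to check numerically. The following is a minimal sketch of mine (it does not appear in the references): it diagonalizes the pendulum Hamiltonian in a truncated plane-wave basis, with the Aharonov-Bohm phase implemented through the twisted boundary condition. The barrier height q = 2 and the basis cutoff are arbitrary choices.

    import numpy as np

    def ground_state_energy(theta, q=2.0, n_max=30):
        # Lowest eigenvalue of H = -d^2/dphi^2 + q*(1 - cos(phi)) with the
        # twisted boundary condition psi(phi + 2*pi) = exp(-i*theta)*psi(phi).
        # Basis: exp(i*(n - theta/(2*pi))*phi) for n = -n_max, ..., n_max.
        n = np.arange(-n_max, n_max + 1)
        k = n - theta / (2.0 * np.pi)        # momenta shifted by the twist
        H = np.diag(k**2 + q)                # kinetic term plus the constant part of V
        off = -0.5 * q * np.ones(2 * n_max)  # cos(phi) couples neighbouring modes
        H += np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)[0]

    for theta in np.linspace(0.0, 2.0 * np.pi, 9):
        print(f"theta = {theta:5.2f}   E0 = {ground_state_energy(theta):.6f}")

Up to a $\theta$-independent constant, the printed values trace out a $1 - \cos\theta$ curve, and the amplitude of that curve shrinks rapidly as $q$ grows, in agreement with the WKB estimate above.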
The reinterpretation of the QCD vacuum in a physical gauge, with a unique non-degenerate vacuum, thus makes the appearance of $\theta$ much less obvious. This is why the standard presentation of the topic still makes use of the temporal gauge and the periodic vacuum picture.
The analysis of the QCD vacuum in the axial gauge is analogous to the interpretation of the Hamiltonian $$ H = -\frac{d^2}{d\phi^2} + q(1-\cos \phi) $$ as the description of a quantum pendulum, i.e. substituting $x \to \phi$, with $0 \leq \phi < 2\pi$. (This interpretation also arises when we work in the temporal gauge and declare that all gauge transformations, large and small, should have no effect on the physics. The distinct degenerate vacua in the usual interpretation are connected by large gauge transformations.)
Without any further thought, one immediately reaches the conclusion that there is no $\theta$ parameter here. However, this is not correct, because a $\theta$ parameter can appear when there is an Aharonov-Bohm potential present.
When the quantum pendulum swings in such a potential, it picks up a phase when it rotates once around the thin solenoid that encloses the magnetic flux. The phase is directly proportional to the magnetic flux in the solenoid.
For the QCD vacuum, the same story goes as follows. In the axial gauge, naively there is no $\theta$ parameter because we do not have a periodic potential and hence no Bloch-momentum. However, nothing forbids that we add the term
$$ -\frac{g^2 \theta}{32\pi^2} \mathrm{Tr}\big(G_{\mu\nu} \tilde{G}^{\mu\nu}\big), $$
where $\tilde{G}^{\mu\nu}$ is the dual field-strength tensor, $\tilde{G}^{a,\mu\nu} = \frac{1}{2} \epsilon^{\mu\nu\rho\sigma} G^a_{\rho\sigma}$, to the Lagrangian. This simply means that we allow for the possibility that there is an Aharonov-Bohm type potential and that it could make a difference when the pendulum rotates once around its suspension.
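One property of this term is worth recording here, because it strengthens the Aharonov-Bohm analogy: the density $\mathrm{Tr}(G_{\mu\nu}\tilde{G}^{\mu\nu})$ is a total divergence,
$$ \mathrm{Tr}\big(G_{\mu\nu} \tilde{G}^{\mu\nu}\big) = \partial_\mu K^\mu, $$
where $K^\mu$ is the Chern-Simons current (I omit its explicit form, since the normalization is convention-dependent). Just like the flux term $e \oint A\, d\phi$ for the pendulum, the $\theta$-term therefore changes neither the classical equations of motion nor any order of perturbation theory; it contributes only through a boundary term, which is sensitive precisely to topologically non-trivial processes such as the tunneling discussed above.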
An obvious question now is what the analogon of the solenoid is for the QCD vacuum. So far, I wasn’t able to find a satisfactory answer. The usual argument for the addition of $-\frac{g^2 \theta}{32\pi^2} \mathrm{Tr}(G_{\mu\nu} \tilde{G}^{\mu\nu})$ to the Lagrangian is simply that nothing forbids its existence.
So far, all experimental evidence points in the direction that there exists no “solenoid” for the QCD vacuum and therefore $\theta = 0$. (The current experimental bound is $\theta < 10^{-10}$.)
From the analysis of the QCD vacuum in the axial gauge and by comparing it to the quantum pendulum, this does not look too surprising. However, we shouldn’t be too quick here and state $\theta = 0$. Before we can say something like this, we first need to understand what the “solenoid” could be for the QCD vacuum.
Understanding this requires that we enter a completely different world: the world of anomalies. This fascinating topic deserves its own post. Usually, it is claimed that the contribution to $\theta$ that comes from this sector of the theory is completely unrelated to the QCD $\theta$. However, we will see that anomalies and the QCD vacuum aren’t that unrelated: So far, we were only concerned with the gauge boson vacuum, while anomalies arise when we consider the fermion vacuum and its interaction with gauge bosons!
This will be discussed in part 4 of my series about the QCD vacuum.
References that describe this perspective of the QCD vacuum
• “Topology in the Weinberg-Salam theory” by N. S. Manton
• “The Interpretation of Pseudoparticles in Physical Gauges” by Claude W. Bernard, Erick J. Weinberg
• Section 11.3 in Rubakov’s “Classical Theory of Gauge Fields”
• This perspective of the QCD vacuum is described in more abstract terms, without the quantum pendulum analogy, in “Introduction to the Yang-Mills Quantum Theory” by R. Jackiw, around Eq. (42).
|
40b8df090340d42b | The Art of International Negotiations
The methodology of engaging in international relations seems to be breaking down. Two issues that come to mind are the US attitude to the International Criminal Court, and Brexit.
Regarding the ICC, on September 10, John Bolton, the US National Security Advisor, announced that Washington would “use any means necessary” to push back against the influence of the ICC. The ICC was established in 2002, and has succeeded in convicting a number of war criminals from Africa and former Yugoslavia, although one can question under exactly whose sovereignty the broken laws fall. Thus a senior military man could be prosecuted for actions actually carried out by more junior soldiers, even in the absence of clear evidence of such orders. Obviously, people carrying out, or even worse, ordering murder, torture, etc, need punishing, but there also needs to be some sort of sovereignty, the reason being that, in my mind anyway, justice needs to be blind to the origin or nature of the perpetrators. If it is only the losing side that gets prosecuted, it is essentially victor’s justice, which is usually little better than revenge. Given that the US, Israel, China and Saudi Arabia have refused to ratify the founding document, on the basis that it had unacceptable consequences for national sovereignty, the concept of “international” is clearly questionable.
Now, as far as I know, no US citizen has ever been indicted, probably because it would be futile, but apparently there has been agitation regarding US soldiers in Afghanistan, particularly regarding alleged torture of detainees. The argument then is that if the crime took place in Afghanistan, the fact that the US has not ratified the court is irrelevant, and any perpetrator of a crime against a ratified member can be prosecuted, irrespective of nationality, or at least that is the view of the ICC. Of course, arresting such a person is another matter. Here, however, there is a further issue. Some of what is alleged, e.g. waterboarding and indefinite detention without due process, apparently occurred with the permission of very senior US officials and politicians, and apparently even the President. This raises the question, exactly how does such an organization decide whether the President of the United States has ordered or permitted something that is illegal? But if the United States is exempt, why are lesser countries susceptible to prosecution? Is it a case of might makes right?
In any case, Bolton’s statement that the US would ban any such members of the ICC from entering the US, and would sanction their funds and prevent them from using the US financial system, is certainly a shot across the bow. The question then is, is this the way of going about negotiations? Or does the US feel there is no alternative? It is certainly acting as if the rest of the world is some sort of unfortunate added extra. In terms of international relations, the United States, through President Trump’s recent speech at the UN, has effectively declared that it wishes to separate its interests from those of the rest of the world. America first! I for one agree that all is not right with the UN, but I do not believe that attitude helps.
The Brexit negotiations are more confusing. The EU rules meant that when Britain elected to leave, there was a two-year period to sort out all the consequences, but at least the last six months of that appeared to be required to put the agreement in place, which left 18 months to reach the agreement. That has almost expired. The EU has decided that the UK has been “dawdling”, trying to present the EU with a deal that would have to be agreed at the last minute, or no deal. The problem with that approach is that “no deal” works both ways, and the assumption that the other side is desperate to have a deal may be misguided. However, there are issues on which the EU is quite obstinate. One is that if the UK wants access to the EU markets, Britain must accept the free movement of citizens, and stopping that is one of the reasons Britain elected to leave the EU. There are other demands by the EU: manufactured goods must be made to the EU rulebook; the European Court of Justice will have overall jurisdiction; the UK must retain European labour and environmental laws. Now it is reasonable to require such things for goods that are shipped to the EU, but the EU should have no say on goods that do not touch the EU, as that is none of their business.
Some seem to predict a total disaster for the UK if they leave with no deal; however, we should note that the UK buys £318 billion from the EU, and exports £235.8 billion to it. So, if all trade stopped, the EU would lose roughly £82 billion more in sales than the UK would. But the situation is worse than that, because Britain’s exports of manufactured goods to Europe include an extensive array of parts, etc. These days, large complicated objects are not made by one company, but rather they are assembled from parts supplied by a large number of different manufacturers. So trade will not stop, and it is in both sides’ interests to keep it going with as few hold-ups as possible.
The other major problem is the Northern Ireland border. Theresa May offered a tolerably straightforward solution, which would allow smooth crossing of the border provided certain “paperwork” (essentially electronic in this case) was properly completed. The EU has responded by saying Northern Ireland must remain fully within the customs union, which effectively means that Northern Ireland would become part of Eire in all but name. No UK prime minister could accept that. As a negotiating stance, President Macron of France has stated the British plan is unacceptable because “it does not respect the integrity of the single market.” Effectively that is saying, either be in the EU or do not trade with it. That is a fairly tough stance. President Macron went further and called some of the Brexiteers liars. Not exactly diplomatic.
There is fairly clear evidence the attitude towards the UK from Brussels has hardened, and they seem to be forcing Britain to opt for “no deal”. Mrs May, being pushed into a corner, has responded by saying that it was unacceptable for the EU to reject her plan and offer nothing in return except “no Brexit”. To succeed in negotiations, both sides need something, and in this case, both sides need trade to continue. Neither side does well out of a failure. But both sides also need reasonably good will, and a desire to reach an agreement. Not a lot of promise there. It is hard to get rid of entrenched pig-headedness.
Space and the Military
One of the more distressing pieces of news recently is that President Trump wants to create a “Space Force” as a branch of the US armed forces. According to Vice President Pence: “Other nations increasingly possess the capability to operate in space, not all of them, however, share our commitment to freedom, to private property, and the rule of law. So as we continue to carry American leadership in space, so also will we carry America’s commitment to freedom in this new frontier.” And, “Our adversaries have transformed space into a warfighting domain already. . . history has proven that peace only comes through strength. And in the realm of outer space, the United States Space Force will be that strength in the years ahead.” There are two reasons I find this troublesome. The obvious one is we do not need war in space, although, of course, if someone else is taking their military to space, it is reasonable to respond. That leaves open the question, is anybody else taking their military to space? The second one is there is a UN convention that says space will be reserved for peaceful purposes, and in the absence of clear evidence of some other violation it appears that the current administration is going to ignore this convention, which raises the deeper problem that if the US is not going to honour its agreements, what is the point of anyone else negotiating? So why? It appears to be what is becoming an all-too-familiar excuse: the Russians have done it.
Done what? The case made by Yleem Poblete (of the State Department) was that Russia has a satellite that has been behaving oddly, and very suspiciously. The first problem here is that the “suspicious satellite” was not identified. The point of concern for Poblete was that Russia deployed, in October 2017, a satellite it claims to be an inspector satellite, and the US thinks it is doing something that is contrary to that claim. So what is it doing? Apparently its orbital behaviour was considered inconsistent with what the US thinks an inspector satellite would have done. That raises the question, what did it do and what was it expected to do? Poblete goes on to say the only certainty is that it is in orbit. The rest of its behaviour is unexpected and unclear in purpose.
Russia did not launch anything that could be so described in October 2017, but it did deploy a subsatellite (Cosmos 2523) which separated from a major satellite then. Apparently Russia launched Cosmos 2519 in June 2017, and in August a subsatellite, Cosmos 2521, separated from it. In October, Cosmos 2523 separated from one of these two. These subsatellites then carried out various manoeuvres; as an example, 2521 may have returned and docked with 2519. They all changed their orbits to have different characteristics. None of these manoeuvres were illegal or threatening, and while we don’t know what they were for, and I suppose we don’t know everything about them, it seems strange to get overly concerned about this. In my opinion, the simplest explanation is that the Russians were practising controlled orbital manoeuvres, possibly under automated control, which, of course, would be highly desirable in any space exploration program.
It is true Poblete raised a very legitimate point: how do you verify what a satellite is actually doing? The same thing goes the other way, of course. One point of concern for me, though, is that this is certainly not a reason to launch a military response. The other question is, is this a straw man accusation, something to politically justify this space force concept?
There is the implied claim that Russia is developing and deploying anti-satellite weapons. Let us leave aside the obvious question of what evidence there is, and ask instead, why would they do that? The most obvious reason is that the US uses military satellites to carry out surveillance on ground activities (and if some sources are to be believed, with extreme accuracy) and also many US guided weapons depend on satellite positioning to steer them. Therefore the accusation is probably true, but it is rather understandable, and I would be surprised if the US military is not doing the same thing to counter Russian satellites. The point I am making here is that the militaries of the world have already taken notice that space exists.
So, is there anything more that a satellite could do, other than carry out surveillance, aid navigation and carry messages? Could it be a weapon? At this stage, I feel it is unlikely, the reason being that any “ammunition” has to be taken up there. It is reasonably easy, although very expensive, to take up electronics, etc, but something that will do damage to something else on the ground is another matter. One might think that taking up a hydrogen bomb would allow a faster attack, but that is not true. Something in orbit has orbital velocity, and re-entering the atmosphere at that speed leads to intense heat generation, while if you use the atmosphere to reduce the speed, it actually takes longer to arrive than a slower, ground-launched missile would. There is a case for shooting down other satellites, but it is still probably easier to do that from Earth. You will hear postulates of lasers, etc, but a laser powerful enough to do real damage has power demands that make it a huge beast. There are much easier ways to damage a satellite, and the chance that there are satellites up there that could seriously damage any given country is probably fairly remote.
One thing that has become a problem is that more than one country has tested anti-satellite weapons by destroying one of their own defunct satellites. The problem then is, what does “destroy” actually mean? Usually it seems to mean, blow the thing up into many pieces, which then go onto erratic orbits, with velocities probably in the order of 7,500 m/s. Now if the orbit were circular, that would be fairly harmless to anything on a corresponding circular orbit because they would never meet, but the fragments of an explosion will have a variety of eccentric orbits on different planes, and while the collisions will not have that relative velocity, the relative velocity could still be in the few thousand meter per second range, and that is a distinctly dangerous velocity. A moderate-sized piece of metal would make a cannonball seem modest.
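To put a rough number on this (a back-of-envelope figure of my own, not from any source): the kinetic energy of a 1 kg fragment at a relative velocity of 3,000 m/s is ½ × 1 kg × (3,000 m/s)², or about 4.5 MJ, roughly the energy released by a kilogram of TNT. A cannonball at a few hundred metres per second carries well under a tenth of that per kilogram.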
As it is right now, orbital space around Earth is starting to get cluttered. I have heard people argue that NASA should investigate asteroid mining. As of now, I am not sure why, because asteroids, apart from a possible iron/nickel core, will have the composition of space dust, and hence have some similarities to basalt on Earth. Nobody wants to mine that. On the other hand, this space junk is made of already refined metals. I rather fancy that collecting that space junk and recycling it would make more sense.
In the meantime, it would also be helpful if the nations could behave in a way that did not lead to weaponizing space.
Have you got what it takes to form a scientific theory?
Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data and the interpretation of what everyone thinks that set of data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests it should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form a rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which allegedly established interpretation is in fact wrong. It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. The reason I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don’t, I like to think there is still an interesting story there.
From September 13 – 20, my novel Athene’s Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier as regards the actual story, not this problem. In this, my fictional character, Gaius Claudius Scaevola is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from “fake news” and a bad subsequent press. First the nickname: no Roman would call him Caligula because even his worst enemies would recognize he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great grandfather etc. of the one you recognize. Caligulae was also Gaius Julius Caesar. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means “lefty”. It is less clear what Caesar means, because in Latin the “ar” ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means “hairy”. Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation, the Latin “C” is the equivalent of the Greek gamma, so it is pronounced as a “G” or “K” – the difference is small and we have no way of knowing. “ae” is pronounced as in “pie”. So Caesar is pronounced something like the German Kaiser.
Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was only personally responsible for thirteen executions, and he had three failed coup attempts on his life, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this is prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators.
He is accused of making a fatuous invasion of Germany. Actually, the evidence is he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is actually a text-book account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes called decrepit ships. Whatever the case, Caligulae gave them the choice between “conquering Neptune” and collecting a mass of sea shells, invading Britain, or facing decimation. They collected sea shells. The exercise was not madness: it was a total humiliation for the two legions to have to carry these through Rome in the form of a “triumph”. This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact he despised many senators inevitably led to bad press, because it was the senatorial class that wrote histories, but like a certain president, he seemed to go out of his way to encourage the bad press. However, he was not seen as a tyrant by the masses. When he died the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency, and, of course, to report it in English.
As for showing how scientific theory can be developed, in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when theory is applied, all major theories start with relatively simple concepts. If we take quantum mechanics as an example of a reasonably difficult piece of theoretical physics, then to get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. Now, the mathematician Hamilton had already shown you can manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) for a wavelength was constant, specifically in units of h (Planck’s quantum of action). All that Schrödinger had to do was to manipulate Hamilton’s waves and ensure that the action came in units of h per wavelength. That may seem easy, but everything was present for some time before Schrödinger put that together. Coming up with an original concept is not at all easy.
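For the curious, here is the skeleton of that argument in modern notation (a schematic reconstruction of mine, for a plane wave in a constant potential, not a quote from any source). De Broglie’s condition of action h per wavelength reads $p = h/\lambda = \hbar k$; writing the matter wave as $\psi = e^{i(kx - \omega t)}$ with $E = \hbar \omega$, the classical energy relation $E = p^2/2m + V$ is reproduced exactly when $\psi$ satisfies
$$ i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V\psi, $$
since each derivative simply pulls down the corresponding factor of $\hbar\omega$ or $\hbar k$. That last line is the Schrödinger equation.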
Anyway, in the novel, Scaevola has to prove the Earth goes around the sun, with what was available then. (No telescopes that helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage that you know it does. (And no, you do not have to invent calculus or Newtonian mechanics.)
The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots in Egypt, then the religious problems of Judea as Christianity starts.
Memories from Fifty Years Ago: Invasion of Czechoslovakia 3.
I returned to the kiosk at five, as requested, and was surprised to be invited by the woman in the kiosk to stay the night at their apartment. So I drove her home, and she must have been a bit surprised at the car, particularly when, before setting off, I refilled the clutch hydraulic oil. The leak was now getting rather bad, and there were only so many clutch usages before a refill, and the number was getting smaller. Anyway, we made it to her apartment, where I met the husband. The Heitlegnerov (I apologise if I got the spelling wrong from memory) apartment was compact, but it seemed to have everything I would expect in a modern western apartment. The previous year I had been in Calgary, so I knew what a modern North American apartment looked like, and the Czech one was much better than where I was living in England.
This family had a rather bad history. First, they were Jews, and had spent most of WW II hiding in the forests, living in huts with dirt floors. The husband had been part of a resistance to the Germans, and when the war was over, he had actually helped get the communists into government, only to find the communists in Czechoslovakia were also anti-Jewish. Back to mud-floor accommodation for a while. Gradually things got better, and when Dubcek came to power, they got up in the world sufficiently to get this apartment. Now they saw it all coming down around their ears. However, by accident, their daughter, Alenka, was in England on a short stay to help her learn English. The parents had discussed this, and they wanted to send a message for her to stay in England, and would I take some family heirlooms and some of her property? Of course I would, with volume restrictions on what were obviously women’s things.
The following morning it was announced that the road to Linz was open at the border, so I set off early. Somehow, the day seemed grim, and very quiet. For a major city, nothing was happening. The day did not get better, and when I drove through České Budějovice the continued absence of activity maintained the depressing feeling. It was just as I was leaving České Budějovice that I noticed two young Czechs hitch-hiking. Since I had not seen any cars for a long time, their prospects were poor, so I stopped. They first wanted me to smuggle them out, but I pointed out that was impossible. Any cursory search would find them, but I would take them to the border, let them out before it, and they would be on their own. I would wait on the other side for a while, in case they made it. Then they wanted me to smuggle something else: a petition to the United Nations, with (according to them) half a million identified signatures. I agreed. I had a tall cardboard box in the boot, and for my trip behind the iron curtain I had taken emergency food: canned food, drink, fruit and rye bread. I had kept the waste, including opened cans, because I could not find anywhere to dump rubbish. The petition was wrapped in plastic bags and went to the bottom; a piece of a different cardboard box went on top, just in case, although that was probably worthless as a deception; the cans went on top, then rotting fruit, then some mouldy bread, then some fruit that was technically still edible, then the remains of the rye bread, then can openers, cutlery, etc.
When I got to the border, the guards were Czech, but they still did a search. When they came to the box, they asked what that was. I pointed out I was just being tidy and tried to look as unconcerned as I could. They started ferreting, but it got increasingly distasteful and they gave up. The barrier went up, and I was in “no-man’s land”. When I got to the Austrian guards, there were the two Czechs, beaming with triumph. They had got through before me, while I was being searched, and had told the Austrian guards about the petition. They thought this was mission accomplished. I had no option but to hand the petition over, and while the expressions on the faces of the Czech guards were worth seeing, I was thoroughly depressed. I had taken a huge risk, and for what? The Austrian guards would at best destroy the petition; at worst hand it back to the Czech authorities. Austria was never going to annoy Russia. As I headed to Linz I was stopped by a journalist who wanted the story and a picture of me and my beat-up Anglia carrying a Czech flag. I have no idea whether it ever got published.
When I got back to England, on the first Saturday I went up to London and to the address where Alenka was staying. It was a grey day with light rain, and the family, being orthodox Jews, left me there standing in the rain. Alenka came to the door, I handed over her valuables, and tried to give as cheerful an account as I could of her parents and their feelings. I asked her what she wanted to do. Apparently there were a few scholarships being made available to Czechs who could find a place in a university, and I promised to do what I could at Southampton for her. As it happened, I found a post-doc was treated as staff, and on my recommendation she could go there, but in the end, somewhere else was found for her (I think East Anglia). However, that did not last, and eventually she got homesick and returned to Czechoslovakia, where things were seemingly improving a little. It would not be helpful for someone in a communist country then to be corresponding with the West, so I never heard from her or her parents again. I am naturally curious as to where her life took her, but I guess I shall never know. |
5cb0376c58c29077 | Tuesday, April 30, 2019
Possible Worlds and Possible Lives: A Meditation on the Art of Living
Here’s a simple thought, but one that I think is quite profound: one’s happiness in life depends, to a large extent, on how one thinks about and navigates the space of possible lives one could have lived. If you have too broad a conception of the space of possibility, you are likely to be anxious and unable to act, always fearing that you are missing out on something better. If you have too narrow a conception of the space of possibility, you are likely to be miserable (particularly if you get trapped in a bad set of branches in the space of possibility) and unable to live life to the full. But it’s not that simple either. Sometimes you have to focus on the negative and sometimes you have to narrow your mindset.
I say this is a profound but simple thought. Why so? Well, it strikes me as profound because it captures something that is fundamentally true about the human condition, something that is integral to a number of philosophical discussions of well-being. It strikes me as simple because I think it’s something that is relatively obvious and presumably must have occurred to many people over the course of human history. And yet, for some reason, I don’t find many people talking about it.
Don’t get me wrong. Plenty of people talk about possible worlds in philosophy and science, and many specific discussions of human life touch upon the idea outlined in the opening paragraph. For example, discussions of human emotions such as regret, or the rationality of decision-making, or the philosophical significance of death, often touch upon the importance of thinking in terms of possible lives. What frustrates me about these discussions is that they don’t do so in an explicit or integrated way.
This article is my attempt to make up for this perceived deficiency. I want to justify my opening claim that one’s happiness in life depends on how one thinks about and navigates the space of possible lives; and I want to further support my assertion that this is a simple and profound idea. I start by clarifying exactly what I am talking about.
1. The Basic Picture: The Space of Possible Lives
The actual world is the world in which we currently live. It can be defined in terms of a list of propositions that exhaustively describes the features of this world. A possible world is a world that could exist. It can be defined as any logically consistent set of propositions describing a world. The space of logically possible worlds is vast. Logical consistency is only a minor constraint on what is possible. Virtually anything goes if this is your only limitation on what is possible. For example, there is a logically possible world in which the only things that exist are an apple and a cat, inside a large box.
This possible world isn’t very likely, of course, and this raises an important point. Possible worlds can be ordered in terms of their accessibility to us. It is easiest to define this in terms of the “distance” between a possible world and the actual world in which we live. Worlds that are ‘close’ to our own world (in the sense that they differ minimally) can be presumed to be relatively accessible to us (though see the discussion below of determinism and free will); contrariwise, worlds that are ‘far away’ (in the sense that they have many differences from our own world) are relatively inaccessible. Some possible worlds will require a technological breakthrough to make them accessible to us (e.g. a world in which interstellar travel is possible for creatures like us); others may never be accessible to us because they breach the fundamental physical laws of our reality (e.g. a world in which universal entropy is reversed). Philosophers often distinguish between these different shades of possibility by using phrases like “physical possibility”, “technical possibility” and so on. Probability is also an important part of the discussion as it gives us a way of quantitatively ranking the accessibility of a possible world.
The idea of a “possible life” can be defined in terms of a possible world. Your actual life is the life you are currently living in the actual world. A “possible life” is simply a different life that you could be living in another possible world. One way of thinking about this is to simply imagine a different possible world where the only differences between it and the actual world relate specifically to your life. Possible lives exist in the past and in the future. There are possible lives that I could have lived and possible lives that I might yet live. For example, there is a possible life where I studied medicine at university rather than law. If I had followed that path, my present life could be very different. Likewise, there is a possible life where I run for political office in the future. If I follow that path, my life will end up being very different from what I currently envisage.
Possible lives can be arranged and ranked in a number of different ways. Obviously, they can be ranked in terms of their accessibility to us (as per the previous discussion of possible worlds), or they can be ranked in terms of their normative value to us. Some possible lives are better than others. A possible life in which I murder someone and get sent to jail for life is presumably going to be worse (for me and for others) than a life in which I work hard and discover a cure for some serious disease.
Pictures are worth a thousand words so consider the image below. It illustrates what I would take to be the fundamental predicament of life. In the centre of the image is a person. Let’s suppose this person is you. The thick bold line represents your actual life (i.e. the life, out of all the possible lives you could have lived, that you are actually living). To the left of your present location is your past and arranged along each side of the thick bold line are the possible lives you could have lived before the present moment. To the right of your present location is your future and arranged along each side of the centre line are the possible lives you might yet live. The possible lives that lie above the line represent lives that are better than your current, actual, life; the possible lives that lie below the line represent lives that are worse than your current life. The accessibility of lives can also be represented in this image. We can assume that the further a life lies from the centre line, the less accessible it is (though in saying this it is important to realise that accessibility does not correlate with betterness or worseness, which is an impression you might get from the way in which I have illustrated it).
The Human Predicament: The Space of Possible Lives
The essence of my position is that how we think about our predicament — nested in a latticework of possible lives — will to a large extent determine how happy and successful we are in our actual life. In particular, broadening and narrowing our conception of the set of possible lives we could have lived, and might yet live, is key to happiness.
2. The Elephant in the Room: Determinism
Before I go any further, I need to address the elephant in the room: determinism. Determinism is a philosophical thesis that holds that every event that occurs in this actual world has a sufficient cause of its existence in the prior events in this world. The life you are living today is the product of all the events that occurred prior to the present moment. Given those events, there is no other way the present moment could have turned out. It simply had to be this way.
There is another way of putting this. According to one well-known philosophical definition of determinism — first coined, I believe, by Peter Van Inwagen — determinism is the view that there is only one possible future. Given the full set of past events (E1…En) there is only one possible next event (En+1), because those prior events fully determine the nature and character of En+1.
If determinism is true, it would seem to put paid to the argument I’m trying to put forward in this article. After all, if determinism is true, it would seem to follow that all talk about the possible lives we could have lived, and might yet live, is fantastical poppycock. There is only one life we could ever live and we may as well get used to it.
But I don’t quite see it that way. In this regard, it’s worth remembering that determinism is a metaphysical thesis, not a scientific one. No amount of scientific evidence of deterministic causation can fully confirm the truth of determinism. And, what’s more, there are some prominent scientific theories that seem to be open to some degree of indeterminism (e.g. quantum theory) or, if not that, are at least open to “possible worlds”-thinking. It is worth noting, for example, that some highly deterministic theories in cosmology and quantum mechanics only preserve their determinism if they allow for the possibility of multiple universes and many worlds. The most famous example of this might be the “many worlds” interpretation of quantum mechanics, first set out by Hugh Everett. This interpretation retains the determinism of the quantum mechanical Schrödinger equation but only does so by holding that there are many different worlds in existence. These worlds may or may not be accessible to us, but it is not illegitimate to talk about them.
Admittedly, these esoteric aspects of cosmology and quantum theory don’t offer much succour for the kind of position I’m defending here. But that brings me to a more important point. Even if determinism is true (and there is, literally, only one possible future) it does not follow that thinking about one’s life in terms of the possible lives one could have lived and might yet live is illegitimate. If the world is deterministic it is still likely to be causally complex. This means that, even if determinism is true, there will often be no easy way for us to say what caused what and what follows from this.
An analogy might help to underscore this. When I was a student, one of the favoured topics in history class was “The Causes of World War I”. I learned from these classes that there are many putative causes of World War I. It’s hard to say which “cause” was critical, if any. Perhaps World War I was caused by German aggression, or perhaps, as Christopher Clark argues in his book The Sleepwalkers, it was a complex concatenation of events, no one of which was sufficient in its own right. It’s really hard to say. For all we know, in the absence of German aggression, things might have gone very differently. Or maybe they wouldn’t. Maybe we would have stumbled into a great war anyway. Historians and fiction writers love to speculate, and it’s often useful to do so: we gain insight into the past by imagining the counterfactuals, and gain wisdom for the future by thinking through the different possible worlds.
What is true for historians and fiction writers is also true for ourselves when we look at our own lives. Our own lives are causally complex. For any one event that occurred in our past (or that may yet occur in our future) there is probably a whole panoply of events that may or may not be critical to its occurrence. As a result, for all we know, there may have been other lives we could have lived and may yet live. To put this more philosophically, even if it is true that we live in a metaphysically deterministic world in which there is only one possible future, to all intents and purposes we still live in an epistemically indeterministic world in which multiple possible futures seem to still be accessible to us.
In this respect, it is important to bear in mind the distinction between fatalism and determinism. Just because the world is deterministic, does not imply that we play no part in shaping its future. We still make a difference and in order to make sense of the difference we might make, we need to entertain “possible worlds”-thinking.
All of this leads me to conclude that determinism does not scupper the argument I am trying to make.
3. Looking Back: Regret, Guilt and Gratitude
If we accept that it is legitimate to think in terms of possible lives, then we open ourselves up to the idea that thinking wisely about the space of possibility is key to happiness and success. To illustrate, we can start by looking back, i.e. by considering the life we are living in the present moment relative to the other lives we might have lived before the present moment.
If, when we do this, we focus predominantly on possible lives that would have been better than the life we are currently living (along whatever metric of “betterness” we prefer), we are likely to be pretty miserable. We will tend to be struck by the sense that our actual life does not measure up. There are better lives we could have been living. Two emotions/attitudes are commonly associated with this style of thinking. The first is regret. This is both a negative feeling about your present life and a judgment that it is inferior to other possibilities. Regret is usually tied to specific past decisions. We regret making those decisions and judge that we could have done better. Sometimes, regret is more general and vague. There is no specific decision that we regret, but we are filled with the general sense that things are not as good as they could be. When the choices we make end up doing harm to others, regret can turn into guilt. We can become wracked by the sense that not only are our lives worse than they might have been, but we have failed in our moral duties too.
As I noted on a previous occasion, I find my own thoughts about the past to be preoccupied by feelings of regret and guilt. I regret not making certain decisions earlier in life (e.g. getting married, having children) or not seizing certain opportunities (e.g. better jobs and so on). This regret can sometimes be overwhelming, even though I acknowledge that it is often irrational. Given the aforementioned causal complexity of the real world, there is no guarantee that if I had done things differently they would have turned out for the better. Thinking about regret in these philosophical terms sometimes helps me to escape the trap of negative thinking.
If, when we look to the past, we focus predominantly on possible lives that would have been worse than the life we are currently living, we are likely to be pretty happy. I say this with some trepidation. It’s possible that some people have a very low hedonic baseline and so no amount of positive thinking about the past will make them happy, but as a general rule of thumb it seems to follow that happiness flows from focusing on the negative space of possibility in the past. If things could have been much worse than they currently are, then we are likely to think that our present lives are not all that bad. This is, in fact, a classic Stoic tactic for ensuring more contentment in life: always imagine how things might have been worse.
Two emotions/attitudes are commonly associated with this style of thinking. The first is a sense of achievement. This is a self-directed emotion and judgment that arises from the belief that you have made your life better than it might otherwise have been. You have charted some stormy waters and navigated a way through the space of possibility that avoided bad outcomes (failure, hardship etc). The second is a feeling of gratitude. This is an other-directed (or outward-directed) emotion and judgment that arises from the belief that although you may not have controlled it, your life has turned out better than it might have done. This could be because other people helped you out, or it could be through sheer luck and accident of birth (though some people might like to distinguish the feeling of luck from that of gratitude).
Given these reflections on looking back, you might think there is an easy way to make yourself happy: focus on how your present life is better than many of the possible lives you could have lived, and don’t focus on how it is worse than others. But that’s easier said than done. Sometimes you can get trapped in spirals of negative thinking where you always think about how things could have been better. Furthermore, focusing entirely on how things might have been worse could well be counterproductive. As I noted in an earlier article, not all regret is bad. You can learn a lot about yourself from your regrets. You can learn about your desires and personal values. This is crucial when we start to look forward.
4. Looking Forward: Optimism, Pessimism and Death
Although looking back is a useful practice, and although it is often an important source of self-knowledge, ultimately looking forward is more important. This is because we live our lives in the forward-looking direction. Life is a one-way journey to the future. Until we invent a technology that enables us to actually go back in time, we have to resign ourselves to the fact that our main opportunity for exploring possible lives lies in the future.
When looking forward, one question predominates: which of the many possible futures that we could access will we actually end up accessing?
If, when we ask this question, we focus primarily on possible futures that are better than our present lives, we are likely to be quite optimistic. Indeed, focusing on better possible futures and the things you can do to make them more accessible, might be one of the keys to happiness. On a previous occasion, I looked at Lisa Bortolotti’s “agency” theory of optimism. In defending this theory, Bortolotti noted that many forms of optimism are irrational: assuming the future is going to be better than the past is often epistemically unwarranted. Nevertheless, assuming that you have some control over the future — even if this is epistemically unwarranted from an objective perspective — does seem to correlate with an increased chance of success. Bortolotti cited some famous studies on cancer patients in support of this view. In those studies, the cancer patients that believed they could influence their prospects of recovery, through, for example, dietary changes or exercise or other personal health regimes, generally did better than those with a more fatalistic attitude.
If, on the other hand, we focus primarily on futures that are worse than our present predicament, we are likely to be quite pessimistic. If we think that we are on the brink of some major personal or societal failure, and that there is nothing we can do to avert this outcome, then we will have little to look forward to. But, we have to be cautious in saying this. Blindly ignoring negative futures is a bad idea. There is an old adage to the effect that you have to “plan for the worst and hope for the best”. There must be some truth to that. You need to be aware of the risks you might be running. You need to develop strategies to avoid them. Indeed, this willingness to think about and anticipate negative futures is key to the agency theory of optimism outlined by Bortolotti. The more successful cancer patients are not the ones that bury their heads in the sand about their condition and blithely think everything will turn out for the best. They are often very aware of the dangers. They just assume that there is something they can do to avoid the negative possibilities.
There is another point here that I think is key when looking forward. How narrowly or broadly we frame the set of possible futures can have a significant impact on our happiness. A narrow framing arises when we think that there are only one or two possible futures accessible to us; a broader framing arises when we think in terms of larger numbers of possibilities. Generally speaking, narrowly framing the future set of possibilities is a bad thing. It encourages you to think in terms of false dichotomies or tradeoffs (either X happens and everything goes badly or Y happens and everything goes well). If you ever find yourself trapped in a narrow framing, it is usually a good idea to take a step back and try to broaden your framing. For example, when thinking about how you might “balance” career ambitions with home and family life, you might have a tendency to narrowly frame the future in terms of an either/or choice: either I have a happy family life or a fulfilling career. But usually choices are more complex than that. There are more possibilities and options to explore. Some of those possible futures might allow for a more harmonious balancing of the two goals.
This is not to say that compromises and tradeoffs are always avoidable. They are not. But it is better to reach that conclusion after a full exploration of the set of possible futures than after a cursory search, particularly when it comes to major life choices. Or so I have found. That said, I also think it is possible to have too broad a framing of the possible futures. You can easily become overwhelmed by the possibilities and paralysed by the number of options. Sometimes a narrow framing concentrates the mind and motivates action. It’s all about finding the right balance: don’t be too narrow-minded, try to focus on the positive, but don’t be too open-minded and ignore the negative either.
Three other points strike me as being apposite when looking forward.
First, I think it is worth reflecting on the role that technology plays in opening up the space of possible futures. I briefly alluded to this earlier on when I pointed out that the development of certain technologies (e.g. interstellar spaceships) might make possible futures accessible to us that we never previously considered. Of course, interstellar spaceships are just a dramatic example of a much more general phenomenon. All manner of technological innovations, from penicillin to international flights to smartphones, do the same thing: they give us access to futures that would otherwise have been impossible. That’s often a good thing, since it gets us out of small, negative spaces of possibility, but remember that technology usually opens up possible futures on both the positive and negative side of the ledger. There are more possible better futures and more possible worse futures. Techno-optimists tend to exaggerate the former; techno-pessimists the latter.
Second, it is worth reflecting on the importance of “thinking in bets” when it comes to how we navigate the set of future possibilities. Since we rarely have perfect control over the future, and since there is much that is uncertain about the unfolding of events, we have to play the odds and hedge our bets, rather than fixate on getting things “right”. Those who are more attuned to this style of thinking will tend to do better, at least in the long run. But, again, this is often easier said than done because it requires a more reflective and detached outlook on what happens as a result of any one decision.
Finally, we have to think about death. Death is, for each individual, the end of all possibilities. It has an interesting effect on the space of possible lives. Once you die, the network of possible lives you could have lived or might yet live vanishes. All the branches are pruned away. All that is left is one solid line through the space of possibility. This line represents the actual life you lived. What trajectory does that line take through the space of possibility? Does it veer upwards or downwards (relative to the dimension of betterness or worseness)? Does it end on a high or low? Although I am somewhat sceptical of our capacity to control the total narrative of our lives, I do think it is worth thinking, occasionally, about the overall shape we would like our lives to have. Maintaining a gently sloping upward trajectory seems like more of a recipe for happiness than riding a roller-coaster of emotional highs and lows.
5. Conclusion
So where does that leave us? I hope I have said enough to convince you that thinking in terms of possible lives is central to the well-lived life. I also hope I have said enough to convince you that there is no simple algorithm you can apply to this task. You might suppose that you can thrive by not dwelling on how things might have been better in the past, and by thinking more about how they might be better in the future (and, in particular, about how you might make them better). And I am sure that this simple heuristic might work in some cases. But things are not that straightforward. You have to learn from past mistakes and embrace some feelings of regret. You have to choose the wisest framing of the future possibility space to make the best choices. There is no one-size-fits-all approach that will guarantee success and happiness.
You might still argue that all of this is trivial and unhelpful. Maybe that is so, but I still maintain my opening position that there is something profound about the idea. Thinking in terms of possible lives integrates and unites many different fields of philosophical inquiry. It integrates concerns about probability and risk, technology and futurism, the philosophy of the emotions, and the tension between optimism and pessimism. It allows us to reconceive and approach all these debates under the same unifying perspective. That seems pretty insightful to me.
Friday, April 26, 2019
Who Should Explore Space: Robots or Humans?
Should humans explore the depths of space? Should we settle on Mars? Should we become a “multi-planetary species”? There is something in the ideal of human space exploration that stirs the soul, that speaks to a primal instinct, that plays upon the desire to explore and test ourselves to the limit. At the same time, there are practical reasons to want to take the giant leap. Space is filled with resources (energy, minerals etc) that we can utilise, and threats we must neutralise (solar flares, asteroids etc).
On previous occasions, I have looked at various arguments defending the view that we ought to explore space. Those arguments fall into three main categories: (i) intellectual arguments, i.e. ones that focus on the intellectual and epistemic benefits of exploring space and learning more about our place within it; (ii) utopian/spiritual arguments, i.e. ones that focus on the need to create a dynamic, open-ended and radically better future for humanity, both for moral and personal reasons; and (iii) existential risk arguments, i.e. ones that focus on the need to explore space to both prevent and avoid existential risks to humanity.
For the purposes of this article, let’s assume that these arguments are valid. In other words, let’s assume that they do indeed provide compelling reasons to explore space. Now, let’s ask the obvious follow-up question: does this mean that humans should be the ones doing the exploring? It is already the case that robots (broadly conceived) do most of the space exploration. There are a handful of humans who have made the trip. But since the end of the Apollo missions in the early 1970s, humans have not gone much further than low earth orbit. For the most part, humans sit back on earth and control the machines that do the hard work. Soon, given improvements in AI and autonomous robots, we may not do much controlling either. We may just sit back and observe.
Should this pattern continue? Is space exploration, like so many other things nowadays, something that is best left to the machines? In this article, I want to try to answer that question. I do so with the help of an article written by Keith Abney entitled “Robots and Space Ethics”. As we will see, Abney thinks that, with one potentially significant exception, we really should leave space exploration to the machines. Indeed, we might be morally obligated to do so. I’m sympathetic to what Abney has to say, but I still hold some hope for human space exploration.
1. Robots do it Better: Against Human Space Exploration
Why should we favour robotic space exploration over human space exploration? As you might imagine, the case is easy to state: robots are better at it. They are less biologically vulnerable. They do not depend on oxygen, or food, or water, or a delicate symbiotic relationship with a group of specially-evolved microorganisms, for their survival. They are less at risk from exposure to harmful solar radiation; they are less at risk of infection from alien microorganisms (a major plot point in HG Wells’s famous novel War of the Worlds). In addition to this, and as Abney documents, there are several major health risks and psychological risks suffered by astronauts that can be avoided through the use of robotic explorers (though he notes that the small number of astronauts makes studies of these risks somewhat dubious).
This is not to say that robots have no vulnerabilities and cannot be damaged by space exploration. They obviously can. Several space probes have been damaged beyond repair trying to land on alien worlds. They have also been harmed by space debris and suffered irreparable harm due to general wear and tear. However, the problems encountered by these space probes just serve to highlight the risk to humans. It’s bad enough that probes have been catastrophically damaged trying to land on Mars, but imagine if it were a crew of humans. The space shuttle fatalities were major tragedies. They sparked rounds of recrimination and investigation. We don’t want a repeat. All of this makes human space exploration both high risk and high cost. If we grant that humans are morally significant in a way that robots are not, then the costs of human space exploration would seem to significantly outweigh the benefits.
But how does this reasoning stack up against the arguments in favour of space exploration? Let’s start with the intellectual argument. The foremost defender of this argument is probably Ian Crawford. Although Crawford grants that robots are central to space exploration nowadays, he suggests that human explorers have advantages over robotic explorers. In particular, he suggests that there are kinds of in-person observation and experimentation that would be possible if humans were on space missions, but that just aren’t possible at the moment with robots. He also argues, more interestingly in my opinion, that space exploration would enhance human art and culture by providing new sources of inspiration for human creativity, and would also enhance political and ethical thinking because of the need to deal with new challenges and forms of social relation (for full details, see my summary here).
Although Abney does not respond directly to Crawford’s argument, he makes some interesting points that could be construed as a response. First, he highlights the fact that speculations about the intellectual value of human space exploration risk ignoring the fact that robots are already the de facto means by which we acquire knowledge of space. In other words, they risk ignoring the fact that without them, we would not have been able to learn as much about space as we have. Why would we assume that this trend will not continue? Second, he argues that claims to the effect that humans might be better at certain kinds of scientific investigation are usually dependent on the current limitations of robotic technology. As robotic technology improves, it’s quite likely that robots will be able to perform the kinds of investigations that we currently believe are only possible with human beings. We already see this happening here on Earth with more advanced forms of AI and robotics; it stands to reason that these advanced forms of AI can be used for space exploration too.
The bottom line then is that if our reasons for going to space are largely intellectual — i.e. to learn more about the cosmos and our place within it — then robots are the way to go. That said, there is nothing in what Abney says that deals with Crawford’s point about the intellectual gains in artistic, ethical and political thought. To appreciate those gains, it seems like it would have to be humans, not robots, that do the exploration. Perhaps one could respond to this by saying that some of these gains (most obviously the artistic ones) could come from watching and learning from robotic space missions; or that these intellectual gains are too nebulous or vague (what counts as an artistic gain?) to carry much weight; or that they come with significant risks that outweigh any putative benefits. For example, Crawford is probably correct to suggest that space exploration will prompt new ethical thinking, but that may largely be because it is so risky. Should we want to expose ourselves to those risks just so that philosophers can get their teeth into some new ethical dilemmas?
Let’s turn next to the more spiritual/utopian argument for space exploration. That argument focuses on the appeal of space exploration to the human spirit and the role that it could play in opening up the possibility of a dynamic and radically better future. Instead of being consigned to Earth, to tend the museum of human history (to co-opt Francis Fukuyama’s evocative phrase), we can forge a new future in space. We can expand the frontiers of human possibility.
This argument, much more so than the intellectual argument, seems to necessitate human participation in space exploration. Abney almost concedes as much in his analysis, but makes a few interesting points by way of response. First, he suggests that the appeal to the human spirit could be addressed by space 'tourism' and not space 'exploration'. In other words, we could look on human space travel as a kind of luxury good, and not something that we need to invest a lot of public money in. The public money, if it should go anywhere, should go to robotic space exploration only. Second, and relatedly, given the high cost of human space travel, any decision to invest money in it would have to factor in the significant opportunity cost of that investment. In other words, it would have to acknowledge that there are other, better, causes in which to invest. It would, consequently, be difficult to morally justify the investment. Third, he argues that, to the extent that human participation is deemed desirable, we should participate remotely, through immersive VR. This would be a lower cost and lower risk way for vulnerable beings like us to explore the further reaches of space.
I find this last suggestion intriguing. I imagine the idea is that we can satisfy our lust for visiting alien worlds or travelling to distant galaxies by using robotic avatars. We can hook ourselves up to these avatars using VR headsets and haptics, and really immerse ourselves in the space environment at minimal risk to our health and well-being. I agree that this would be a good way to do it, if it were feasible. That said, the technical challenges could be formidable. In particular, I think the time-lag between sending and receiving a signal between yourself and your robotic avatar would make it practically unwieldy. In the end, we might end up with little more than an immersive but largely passive space simulator. That doesn’t seem all that exciting.
2. The Interstellar Doomsday Argument
I mentioned at the outset that despite favouring robotic space exploration, Abney does think that there is one case in which human exploration might be morally compelling, namely: to avoid existential risk.
To be clear, Abney argues that robots can help us to mitigate many existential risks. For example, we could use autonomous robots to monitor and neutralise potential asteroid impacts, or to reengineer the climate in order to mitigate climate change. Nevertheless, he accepts that there is always the chance that these robotic efforts might fail (e.g. a rogue asteroid might slip through our planetary defence system) and Earth might get destroyed. What then? Well, if we had a human colony on another planet (or on an interstellar spaceship) there would be a chance of long-term human survival. Granting that we have a moral duty to prevent the destruction of our species, it consequently seems to follow that we have a duty to invest in at least some human space exploration.
What’s more, Abney argues that we may have to do this sooner rather than later. This is where he makes his most interesting argument, something he calls the “Interstellar Doomsday Argument”. This argument applies the now-classic probability argument for “Doom Soon” to our thinking about the need for interstellar space exploration. This argument takes a bit of effort to understand, but it is worth it.
The classic Doomsday Argument, defended first by John Leslie and then championed by Nick Bostrom and others, claims that human extinction might be much closer in the future than we think. The argument works from some plausible initial assumptions and then applies to those assumptions some basic principles drawn from probability theory. I’m not going to explain the full thing (there are some excellent online primers about it, if you are interested) but I will give the gist of it. The idea is that, if you have no other background knowledge to tell you otherwise, you should assume that you are a randomly distributed member of the total number of humans that will ever live (this is the Copernican assumption or “self-sampling assumption”). You should also assume, if you have no background knowledge to tell you otherwise, that the distribution of the total number of humans that will ever live will follow a normal pattern. From this, you can conclude that you are highly unlikely to be at the extreme ends of the distribution (i.e. very near the start of the sequence of all humans; or very near the end). You can also conclude that there is a highly probable upper limit on the total number of people who will ever live. If you play around with some of the background knowledge about the total human population to date and its distribution, you can generate reasonably pessimistic conclusions about how soon human extinction is likely to be.
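To make that reasoning concrete, here is a worked instance with purely illustrative numbers of my own (they are not Leslie’s or Bostrom’s estimates). Suppose roughly 60 billion humans have been born before you. If you are a random member of the total sequence of humans who will ever live, there is only a 5% chance that you fall within the first 5% of that sequence. So, with 95% confidence, those 60 billion earlier births make up at least 5% of everyone who will ever live, which caps the total at:
60 billion / 0.05 = 1.2 trillion people
Combine that cap with an assumption about future birth rates and you get an upper bound on how long humanity has left, which is the pessimistic flavour of conclusion the argument generates.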
That’s the gist of the original Doomsday Argument. Abney uses a variant on it, first set out by John Richard Gott in a paper in the journal Nature. Gott’s argument, using the standard tools of probability theory, applies to the observation of all temporally distributed phenomena, not just one’s distribution within the total population of humans who will ever live. The argument (called the “Delta t” argument) states that:
Gott’s Delta t Argument “[I]f there is nothing special about one’s observation of a phenomenon, one should expect a 95% probability that the phenomenon will continue for between 1/39 times and 39 times its present duration, as there’s only a 5% possibility that your random observation comes in the first 2.5% of its lifetime, or the last 2.5%”
(Abney 2017, 364).
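Stated symbolically (my own restatement of the quoted passage, not Abney’s or Gott’s notation): if t_past is how long a phenomenon has lasted at the moment you observe it, then with 95% probability its remaining duration t_future satisfies:
t_past / 39 ≤ t_future ≤ 39 x t_past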
Gott originally used his argument to make predictions about how long the Berlin Wall was likely to stand (given the point in time at which he visited it), and how long a Broadway show was likely to remain open (given the point in time at which he watched it). Abney uses the argument to make predictions about how long humanity is likely to last as an interstellar species.
Abney starts with the observation that humanity first became an interstellar species sometime in August 2012. That was when the Voyager 1 probe (first launched in the 1970s) exited our solar system and entered interstellar space. Approximately seven years have elapsed since then (I’m writing this in 2019). Assuming that there is nothing special about the point in time at which I am “observing” Voyager 1’s interstellar journey, we can apply the Delta t argument and conclude that humanity’s status as an interstellar species is likely to last between (1/39 x 7 years) and (39 x 7 years). That means that there is a 95% chance that we have only got between 66 days and 273 years left of interstellar existence.
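To check the arithmetic: 7 years / 39 ≈ 0.18 years ≈ 65.5 days, which rounds to the 66 days quoted, and 39 x 7 years = 273 years.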
That should be somewhat alarming. It means that we don’t have as long as we might think to escape our planet and address the existential risks of staying put. In fact, the conclusion becomes more compelling (and more alarming) if we combine the Doomsday argument with thoughts about the Great Silence and the Great Filter.
The Great Silence is the concern, first set out by Enrico Fermi, about the apparent absence of intelligent alien life in our galaxy. Fermi’s point was that if there is intelligent life out there, we would expect to have heard something from it by now. The universe is a big place, but it has existed for a long time, and if an intelligent species has any desire to explore it, it would have had ample time to do so by now. This has since been supported by calculations showing that if an intelligent species used robotic probes to explore the universe (specifically, if it used self-replicating Von Neumann probes) then it would only take a few hundred million years to ensure that every solar system had at least one such probe in it.
The Great Filter is the concern, first set out by Robin Hanson, about what it is that prevents intelligent species from exploring the universe and making contact with us. Working off Fermi’s worries about the Great Silence, Hanson argued that if intelligent life has not made contact with us yet (or left some sign or indication of its existence) then it must be because there is some force that prevents it from doing so. Either species tend not to evolve to the point that their intelligence enables them to explore space, or they destroy themselves when they reach a point of technological sophistication, or they just don’t last very long when they reach the interstellar phase (there are other possibilities too).
Whatever the explanation of the Great Silence and the Great Filter, the fact that there do not appear to be other interstellar species, and that we do not know why, should give us reason to think that our current interstellar status will be short-lived. That might tip the balance in favour of human space exploration.
Before closing, it is worth noting that Doomsday reasoning of the sort favoured by Abney is not without its critics. Several people have challenged and refined Gott’s argument over the years, and Olle Häggström argued that the Doomsday argument is fallacious, and an unfortunate blight on futurist thinking, in his 2016 book Here Be Dragons.
Thursday, April 25, 2019
#58 - Neely on Augmented Reality, Ethics and Property Rights
In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other services (the RSS feed is here).
Show Notes
• 0:00 - Introduction
• 1:00 - What is augmented reality (AR)?
• 5:55 - Is augmented reality overhyped?
• 10:36 - What are property rights?
• 14:22 - Justice and autonomy in the protection of property rights
• 16:47 - Are we comfortable with property rights over virtual spaces/objects?
• 22:30 - The blending problem: why augmented reality poses a unique problem for the protection of property rights
• 27:00 - The different modalities of augmented reality: single-sphere or multi-sphere?
• 30:45 - Scenario 1: Single-sphere AR with private property
• 34:28 - Scenario 2: Multi-sphere AR with private property
• 37:30 - Other ethical problems in scenario 2
• 43:25 - Augmented reality vs imagination
• 47:15 - Public property as contested space
• 49:38 - Scenario 3: Multi-sphere AR with public property
• 54:30 - Scenario 4: Single-sphere AR with public property
• 1:00:28 - Must the owner of the single-sphere AR platform be regulated as a public utility/entity?
• 1:02:25 - Other important ethical issues that arise from the use of AR
Sunday, April 21, 2019
Understanding Hume on Miracles (Audio Essay)
This audio essay is an Easter special. It focuses on David Hume's famous argument about miracles. First written over 250 years ago, Hume's essay 'Of Miracles' purports to provide an "everlasting check" against all kinds of "superstitious delusion". But is this true? Does Hume give us good reason to reject the testimonial proof provided on behalf of historical miracles? Maybe not, but he certainly provides a valuable framework for thinking critically about this issue.
You can download the audio here or listen below. You can also subscribe on Apple, Stitcher and a variety of other podcatching services (the RSS feed is here).
This audio essay is based on an earlier written essay (available here). If you are interested in further reading about the topic, I recommend the following essays:
Friday, April 19, 2019
The Ethics of Designing People: The Habermasian Critique
Suppose in the not-too-distant future we master the art of creating people. In other words, technology advances to the point that you and I can walk into a store (or go online!) and order a new artificial person from a retailer. This artificial person will be a full-blown person in the proper philosophical sense of the term “person”. They will have all the attributes we usually ascribe to a human person. They will have the capacity to suffer, to think rationally, to desire certain futures, to conceive of themselves as a single coherent self and so on. Furthermore, you and I will have the power to design this person according to our own specifications. We will be able to pick their eye colour, height, hairstyle, personality, intelligence, life preferences and more. We will be able to completely customise them to our tastes. Here’s the question: would it be ethical for us to make use of this power?
Note that for the purposes of this thought experiment it doesn’t matter too much what the artificial person is made of. It could be a wholly biological entity, made from the same stuff as any human child, but genetically and biomedically engineered according to our customisation. Or it could be wholly artificial, made from silicon chips and motorised bits, a bit like Data from Star Trek. None of this matters. What matters is that (a) it is a person and (b) it has been custom built to order. Is it ethical to create such a being?
Some people think it wouldn’t be; some people think it would be. In this post I want to look at the arguments made by those who think it would be a bad idea to design a person from scratch in this fashion. In particular I want to look at a style of argument made popular by the German philosopher Jurgen Habermas in his critique of positive eugenics. According to this argument, you should not design a person because doing so would necessarily compromise the autonomy and equality of that person. It would turn them into a product not a person; an object not a subject.
Although this argument is Habermasian in origin, I’m not going to examine Habermas’s version of it. Instead, I’m going to look at a version of it that is presented by the Polish philosopher Maciej Musial in his article “Designing (artificial) people to serve - the other side of the coin”. This is an interesting article, one that responds to an argument from Steve Petersen claiming that it would be permissible to create an artificial person who served your needs in some way. I’ve covered Petersen’s argument before on this blog (many moons ago). Some of what Musial says about Petersen’s argument has merit to it, but I want to skirt around the topic of designing robot servants (who are still persons) and focus on the more general idea of creating persons.
1. Clarifying the Issue: The “No Difference” Argument
To understand Musial’s argument, we have to understand some of the dialectical context in which it is presented. As mentioned, it is a response to Steve Petersen’s claim that it is okay to create robot persons that serve our needs. Without going into all the details of Petersen’s argument, one of the claims that Petersen makes while defending this view is that there is no important difference between programming or designing an artificial person to really want to do something and having such a person come into existence through a process of natural biological conception and socialisation.
Why is that? Petersen makes a couple of points. First, he suggests that there is no real difference between being born by natural biological means and being programmed/designed by artificial means. Both processes entail a type of programming. In the former case, evolution by natural selection has “programmed” us, indirectly and over a long period of time, with a certain biological nature; in the latter case, the programming is more immediate and direct, but it is fundamentally the same thing. This analogy is not ridiculous. Some people — notably Daniel Dennett in his book Darwin’s Dangerous Idea — have argued that evolution is an algorithmic process, very much akin to computer programming, that designs us to serve certain evolutionary ends; and, furthermore, evolutionary algorithms are now a common design strategy in computer programming.
The other point Petersen makes is that there is no real difference between being raised by one’s parents and being intentionally designed by them. Both processes have goals and intentions behind them. Parents often want to raise their children in a particular way. For example, some parents want their children to share their religious beliefs, to follow very specific career paths, and to have the success that they never had. They will take concrete steps to ensure that this is the case, bringing their children to church every week, giving them the best possible education, and (say) training them in the family business. These methods of steering a child’s future have their limitations, and might be a bit haphazard, but they do involve intentional design (even if parents deny this). All Petersen is imagining is that different methods, aimed at the same outcome, become available. Since both methods have the same purpose, how could they be ethically different?
To put this argument in more formal terms:
• (1) If there is no important difference between (i) biologically conceiving and raising a natural person and (ii) designing and programming an artificial person, then one cannot object to the creation of an artificial person on the grounds that it involves designing and programming them in particular ways.
• (2) There is no important difference between (i) and (ii) (following the arguments just given).
• (3) Therefore, one cannot object to the creation of artificial persons on the grounds that it involves designing and programming them in particular ways.
To be clear, there are many other ethical objections that might arise in relation to the creation of artificial persons. Maybe it would be too expensive? Maybe their presence would have unwelcome consequences for society? Some of these are addressed in Petersen’s original article and Musial’s response. I am not going to get into them here. I am solely interested in this “no difference” argument.
2. The Habermasian Response: There is a difference
The Habermasian response to this argument takes aim at premise (2). It rests on the belief that there are several crucial ethical differences between the two processes. Musial develops this idea by focusing in particular on how being designed changes one’s relationship with oneself, one’s creators, and the rest of society.
Before we look at his specific claims it is worth reflecting for a moment on the kinds of differences he needs to pinpoint in order to undermine the “no difference” argument. It’s not just any difference that will do. After all, the processes are clearly different in many ways. For example, one thing that people often point to is that biological conception and parental socialisation are somewhat contingent and haphazard processes over which parents have little control. In other words, parents may desire that their children turn out a particular way, but they cannot guarantee that this will happen. They have to play the genetic and developmental lottery (indeed, there is even a well-known line of research suggesting that, beyond genetics, parents contribute little to the ultimate success and happiness of their children).
That’s certainly a difference, but it is not the kind of difference you need to undermine the “no difference” argument. Why not? Because it is not clear what its ethical significance is. Does a lack of control make one process more ethically acceptable than another? On the face of it, it’s not obvious that it does. If anything, one might suspect the ethical acceptability runs in the opposite direction. Surely it is ethically reckless to just run the genetic and developmental lottery and hope that everything turns out for the best? For contingency and lack of control to undermine the “no difference” argument, it will need to be shown that they translate into some other ethically relevant difference. Do they?
In his article, Musial highlights two potentially relevant differences that contingency and lack of control might translate into. The first has to do with the effects of being designed and programmed on a person’s sense of autonomy. The gist of this argument is that if one person (or a group of persons) designs another person to have certain capacities or to serve certain ends, then that other person cannot really be the autonomous author of their own lives. They must live up to someone else’s expectations and demands.
Of course, someone like Petersen would jump back in at this point and say that this can happen anyway with traditional parental education and socialisation. Parents can impose their own expectations and demands on their children and their children can feel a lack of autonomy as a result. Despite this, we don’t think that traditional parenting is ethically impermissible (though I will come back to this issue again below).
But Musial argues that this does not compare like with like. The expectations and demands of traditional parenting usually arise after the child has “entered the world of intersubjective dialogue”. In other words, a natural child can at least express its own wishes and make its feelings known in response to parental education and socialisation. It can reject the parental expectations if it wishes (even if that makes its life difficult in other ways). Similarly, even if the child does go along with the parental expectations, it can learn to desire the things the parents desire for it and to achieve the things they wish it to achieve. This is very different from having those desires and expectations pre-programmed into the child before it is born through genetic manipulation or biomedical engineering. It is much harder to reject those pre-programmed expectations because of the way in which they are hardwired in.
It might be disputed at this juncture that even biological children will have some genetic endowments that they do not like and are hard to reject. For example, I am shorter than I would like to be. I am sure this is a result of parental genetics. I don’t hold it against them or question my autonomy as a result. But Musial argues that my frustration with being shorter than I would like to be is different from the frustration that might be experienced by someone who is deliberately designed to be a particular height. In my case, it is not that my parents imposed a particular height expectation on me. They just rolled the genetic dice. In the case of someone who is designed to be a particular height, they can trace that height back to a specific parental intention. They know they are living up to someone else’s expectations in a way that I do not.
Musial’s second argument has to do with equality. The claim is that being designed and programmed to serve a particular aim (or set of aims) undermines an egalitarian ethos. Egalitarianism (i.e. the belief that all human beings are morally equal) can only thrive in a world of contingency. Indeed, in the original Habermasian presentation, the claim was that contingency is a “necessary presupposition” of egalitarian interpersonal relationships. This is because if one person has designed another there is a dependency relationship between them. The designee knows that they have been created at the whim of the designer and are supposed to serve the ends of the designer. There is a necessary and unavoidable asymmetry between them. Not only that, but the designee will also know themselves to be different from all other non-designed persons.
Musial argues that the inequality that results from the design process can be both normative and empirical in nature. In other words, the designee may be designated as normatively inferior to other people because they have been created to serve a particular end (and so do not have the open-ended freedom of everyone else); and the designee may just feel themselves to be inferior because they know they have been intended to serve an end, or may be treated as inferior by everyone else. Either way, egalitarianism suffers.
One potential objection to this line of thought would be to argue that the position of the designee in this brave new world of artificial persons is not that different from the position of all human beings under traditional theistic worldviews. Under theism, the assumption is usually that we are all designed by God. Isn’t there a necessary relationship of inequality as a result? Without getting into the theological weeds, this may indeed be true, but even so there is a critical difference between being a designee under traditional theism and being a designee in the circumstances being envisaged by Musial and others. Under theism, all human persons are designees and so all share in the same unequal status with respect to the designer. That’s different from a world in which some people are designed by specific others to serve specific ends and some are not. In any event, this point will only be relevant to someone who believes in traditional theism.
3. Problems with the Habermasian Critique
That’s the essence of the Habermas/Musial critique of the no difference argument. Is it any good? I have two major concerns.
The first is a general philosophical one. It has to do with the coherence of individual autonomy and freedom. One could write entire treatises on both of these concepts and still barely scratch the surface of the philosophical debate about them. Nevertheless, I worry that the Habermas/Musial argument depends on some dubious, and borderline mysterian, thinking about the differences between natural and artificial processes and their effect on autonomy. In his presentation of the argument, Musial concedes that natural forces do, to some extent, impact on our autonomy. In other words, our desires, preferences and attitudes are shaped by forces beyond our control. Still, following Habermas, he claims that “natural growth conditions” allow us to be self-authors in a way that artificial design processes do not.
I’ll dispute the second half of this claim in a moment but for now I want to dwell on the first half. Is it really true that natural growth conditions allow us to be self-authors? Maybe if you believe in contra-causal free will (and if you believe this is somehow absent in created persons). But if you don’t, then it is hard to see how this can be true if it is conceded that external forces, including biological evolution and cultural indoctrination, have a significant impact on our aptitudes, desires and expectations. It may be true that under natural growth conditions you cannot identify a single person who has designed you to be a particular way or to serve a particular end — the causal feedback loops are a bit too messy for that — but that doesn’t make the desires that you have more authentically yours as a result. Just because you can pinpoint the exact external cause of a belief or desire in one case, but not in the other, it does not mean that you have greater self-authorship in the latter. You have an illusion of self-authorship, nothing more. Once that illusion is revealed to you, how is it any more existentially reassuring than learning that you were intentionally designed to be a particular way? If anything, we might suspect that the latter would be more existentially reassuring. At least you know that you are not the way you are due to blind chance and dumb luck (in this respect it might be worth noting that a traditional goal of psychoanalytic therapy was to uncover the deep developmental and non-self determined causes of your personal traits and foibles). Furthermore, in either case, it seems to me that the illusion of autonomy could be sustained despite the knowledge of external causal influences. This would be true if, even having learned of the illusion, you still have the capacity for rational thought and the capacity to learn from your experiences.
This brings me to the second concern, which is more important. It has to do with the intended object or goal behind the intentional design of an artificial person. Notwithstanding my concerns about the nature of autonomy, I think the Habermas/Musial argument does provide reason to worry about the ethics of creating people to serve very specific ends. In other words, I would concede that it might be questionable to create, say, an artificial person who has been designed and programmed to really want to do your ironing. If that person is a genuine person — i.e. has the cognitive and emotional capacities we usually associate with personhood — then it might be disconcerting for them to learn that they were designed for this purpose, and that knowledge might impact on their sense of autonomy and equality.
But this is only because they have been designed to serve a very specific end. If the goal of the designer/programmer is not to create a person to serve a specific end but, rather, to design someone who has enhanced capacities for autonomous thought, then the problem goes away. In that case, the artificial person would probably be customised to have greater intelligence, learning capacity, foresight, and imagination than a natural born person, but there would be no specific end that they are intended to serve. In other words, the designer would not be trying to create someone who could do the ironing but, rather, someone who could live a rich and flourishing life, whatever they decide for themselves. I’m not a parent (yet) myself, but I imagine that this should really be the goal of ethical parenting: not to raise the next chess champion (or whatever) but to raise someone who has the capacity to decide what the good life should be for themselves. Whether that is done through traditional parenting, or through design and programming, strikes me as irrelevant.
I would add to this that the Habermas/Musial argument, even in the case of a person who has been designed to serve a specific end, only works on the assumption that the specific end that the person has been designed to serve is hard to reject after they learn that they have been designed to serve that end. But it is not obvious to me that this would be the case. If we have the technology that enables us to specifically design artificial people from birth, it seems likely that we would also have the technology to reprogram them in the middle of life too. Consequently, someone who learns that they have been designed to serve a particular end could easily reject that end by having themselves reprogrammed. It’s only if you assume that this power is absent, or that designers exert continued control over the lives of the designees, that the tragedy of being designed might continue.
It could be argued, in response to this, that if you are not designing an artificial person to serve a specific end, then there is no point in creating them. Musial raises this as a worry at the end of his article, when he suggests that the only ethical way to create an artificial person is to not specify any of their features. But I think this is wrong. You can specify some of their features without specifying that they serve a specific end, and if you are worried about the ethics of creating such a person that does not serve a specific end, you may as well ask: what’s the point of creating natural persons if they don’t serve any particular ends? There are many reasons to do so. In my paper “Why we should create artificial offspring”, I argued that we might want to create artificial people in order to secure a longer collective afterlife, and because doing so would add value to our lives. That’s at least one reason.
This is not to say there are no problems with creating artificially designed persons. For example, I think creating an artificially enhanced person (i.e. one with capacities that exceed those of most ordinary human beings) could be problematic from an egalitarian perspective. This is not because the designee would be in an inferior position to the non-designed but rather because the non-designed might perceive themselves to be at a disadvantage relative to the designee. This has been a long-standing concern in the enhancement debate. But worrying about that takes us beyond the Habermasian critique and is something to address another day.
Friday, April 12, 2019
The Argument for Medical Nihilism
Suppose you have just been diagnosed with a rare illness. You go to your doctor and they put you through a series of tests. In the end, they recommend that you take a new drug — wonderzene — that has recently been approved by the FDA following several successful trials. How confident should you be that this drug will improve your condition?
You might think that this question cannot be answered in the abstract. It has to be assessed on a case by case basis. What is the survival rate for your particular illness? What is its underlying pathophysiology? What does the drug do? How successful were these trials? And in many ways you would be right. Your confidence in the success of the treatment does depend on the empirical facts. But that’s not all it depends on. It also depends on assumptions that medical scientists make about the nature of your illness and on the institutional framework in which the scientific evidence concerning the illness and its treatment is produced, interpreted and communicated to patients like you. When you think about these other aspects of the medical scientific process, it might be the case that you should be very sceptical about the prospects of your treatment being a success. This could be true irrespective of the exact nature of the drug in question and the evidence concerning its effectiveness.
That is the gist of the argument put forward by Jacob Stegenga in his provocative book Medical Nihilism. The book argues for an extreme form of scepticism about the effectiveness of medical interventions, specifically pharmaceutical interventions (although Stegenga intends his thesis to have broader significance). The book is a real tour-de-force in applied philosophy, examining in detail the methods and practices of modern medical science and highlighting their many flaws. It is eye-opening and disheartening, though not particularly surprising to anyone who has been paying attention to the major scandals in scientific research for the past 20 years.
I highly recommend reading the book itself. In this post I want to try to provide a condensed summary of its main argument. I do so partly to help myself understand the argument, and partly to provide a useful primer to the book for those who have not read it. I hope that reading it stimulates further interest in the topic.
1. The Master Argument for Medical Nihilism
Let’s start by clarifying the central thesis. What exactly is medical nihilism? As Stegenga notes in his introductory chapter, “nihilism” is usually associated with the view that “some particular kind of value, abstract good, or form of meaning” does not exist (Stegenga 2018, 6). Nihilism comes in both metaphysical and epistemological flavours. In other words, it can be understood as the claim that some kind of value genuinely does not exist (the metaphysical thesis) or that it is impossible to know/justify one’s belief in its existence (the epistemological thesis).
In the medical context, nihilism can be understood relative to the overarching goals of medicine. These goals are to eliminate both the symptoms of disease and, hopefully, the underlying causes of disease. Medical nihilism is then the view that this is (very often) not possible and that it is very difficult to justify our confidence in the effectiveness of medical interventions with respect to those goals. For what it’s worth, I think that the term ‘nihilism’ oversells the argument that Stegenga offers. I don’t think he quite justifies total nihilism with respect to medical interventions; though he does justify strong scepticism. That said, Stegenga uses the term nihilism to align himself with 19th century medical sceptics who adopted a view known as ‘therapeutic nihilism’ which is somewhat similar to the view Stegenga defends.
Stegenga couches the argument for medical nihilism in Bayesian terms. If that’s something that is unfamiliar to you, then I recommend reading one of the many excellent online tutorials on Bayes’ Theorem. Very roughly, Bayes’ Theorem is a mathematical formula for calculating the posterior probability of a hypothesis or theory (H) given some evidence (E). Or, to put it another way, it is a formula for calculating how confident you should be in a hypothesis given that you have received some evidence that appears to speak in its favour (or not, as the case may be). This probability can be written as P(H|E) — which reads in English as “the probability of H given E”. There is a formal derivation of Bayes’ Theorem that I will not go through. For present purposes, it suffices to know that the P(H|E) depends on three other probabilities: (i) the prior probability of the hypothesis being true, irrespective of the evidence (i.e. P(H)); (ii) the probability (aka the “likelihood”) of the evidence given the hypothesis (i.e. P(E|H)); and (iii) the prior probability of the evidence, irrespective of the hypothesis (i.e. P(E)). This can be written out as an equation, as follows:
P(H|E) = P(H) x P(E|H) / P(E)
In English, this equation states that the probability of the hypothesis given the evidence is equal to the prior probability of the hypothesis, multiplied by the probability of the evidence given the hypothesis, divided by the prior probability of the evidence.
This equation is critical to understanding Stegenga’s argument because, without knowing any actual figures for the relevant probabilities, you know from the equation itself that the P(H|E) must be low if the following three conditions are met: (i) the P(H) is low (i.e. if it is very unlikely, irrespective of the evidence, that the hypothesis is true); (ii) the P(E|H) is low (i.e. the evidence observed is not very probable given the hypothesis); and (iii) the P(E) is high (i.e. it is very likely that you would observe the evidence irrespective of whether the hypothesis was true or not). To confirm this, just plug figures into the equation and see for yourself.
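To make that concrete, plug in some purely illustrative figures of my own (these are not Stegenga’s numbers, just values chosen to satisfy the three conditions): let P(H) = 0.1, P(E|H) = 0.3 and P(E) = 0.9. Then:
P(H|E) = (0.1 x 0.3) / 0.9 ≈ 0.033
In other words, even after observing evidence that appears to favour the hypothesis, your confidence in it should be only about 3%.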
That’s all the background on Bayes’ theorem that you need to understand Stegenga’s case for medical nihilism. In Stegenga’s case, the hypothesis (H) in which we are interested is the claim that any particular medical intervention is effective, and the evidence (E) in which we are interested is anything that speaks in favour of that hypothesis. So, in other words, we are trying to figure out how confident we should be about the claim that the intervention is effective given that we have been presented with evidence that appears to support its effectiveness. We calculate that using Bayes’ theorem and we know from the preceding discussion that our confidence should be very low if the three conditions outlined above are met. These three conditions thus form the premises of the following formal argument in favour of medical nihilism.
• (1) P(H) is low (i.e. the prior probability of any particular medical intervention being effective is low)
• (2) P(E|H) is low (i.e. the evidence observed is unlikely given the hypothesis that the medical intervention is effective)
• (3) P(E) is high (i.e. the prior probability of observing evidence that favours the treatment, irrespective of whether the treatment is actually effective, is high)
• (4) Therefore (by Bayes’ theorem) the P(H|E) must be low (i.e. the posterior probability of the medical intervention being successful, given evidence that appears to favour it, is low)
The bulk of Stegenga’s book is dedicated to defending the three premises of this argument. He dedicates most attention to defending premise (3), but the others are not neglected. Let’s go through each of them now in more detail. Doing so should help to eliminate lingering confusions you might have about this abstract presentation of the argument.
2. Defending the First Premise: The P(H) is Low
Stegenga offers two arguments in support of the claim that medical interventions have a low prior probability of success. The first argument is relatively straightforward. We can call it the argument from historical failure. This argument is an inductive inference from the fact that most historical medical interventions are unsuccessful. Stegenga gives many examples. Classic ones would include the use of bloodletting and mercury to cure many illnesses, “hydropathy, tartar emetic, strychnine, opium, jalap, Daffy’s elixir, Turlington’s Balsam of life” and many more treatments that were once in vogue but have now been abandoned (Stegenga 2018, 169).
Of course, the problem with focusing on historical examples of this sort is that they are often dismissed by proponents of the “standard narrative of medical science”. This narrative runs like this: “once upon a time, it is true, most medical interventions were worse than useless, but then, sometime in the 1800s, we discovered scientific methods and things started to improve”. This is taken to mean that you can’t use these historical examples to question the prior probability of modern medical treatments.
Fortunately, you don’t need to. Even in the modern era most putative medical treatments are failures. Drug companies try out many more treatments than ever come to market, and among those that do come to market, a large number end up being withdrawn or restricted due to their relative uselessness or, in some famous cases, outright dangerousness. Stegenga gives dozens of examples on pages 170-171 of his book. I won’t list them all here but I will give a quick flavour of them (if you click on the links, you can learn more about the individual cases). The examples of withdrawn or restricted drugs include: isotretinoin, rosiglitazone, valdecoxib, fenfluramine, sibutramine, rofecoxib, cerivastatin, and nefazodone. The example of rofecoxib (marketed as Vioxx) is particularly interesting. It is a pain relief drug, usually prescribed for arthritis, that was approved in 1999 but then withdrawn due to associations with increased risk of heart attack and stroke. It was prescribed to more than 80 million people when it was on the market (there is some attempt to return it to market now). And, again, that is just one example among many. Other prominent medical failures include monoamine oxidase inhibitors, which were widely prescribed for depression in the mid-20th century, only later to be abandoned due to ineffectiveness, and hormone replacement therapy (HRT) for menopausal women.
These many examples of past medical failure, even in the modern era, suggest that it would be wise to assign a low prior probability to the success of any new treatment. That said, Stegenga admits that this is a suggestive argument only since it is very difficult to give an accurate statement of the ratio of effective to ineffective treatments from this data (one reason for this is that it is difficult to get a complete dataset and the dataset that we do have is subject to flux, i.e. there are several treatments that are still on the market that may soon be withdrawn due to ineffectiveness or harmfulness).
Stegenga’s second argument for assigning a low prior probability to H is more conceptual and theoretical in nature. It is the argument from the paucity of magic bullets. Stegenga’s book isn’t entirely pessimistic. He readily concedes that some medical treatments have been spectacular successes. These include the use of antibiotics and vaccines for the treatment of infectious diseases and the use of insulin for the treatment of diabetes. One property shared by these successful treatments is that they tend to be ‘magic bullets’ (the term comes from the chemist Paul Ehrlich). What this means is that they target a very specific cause of disease (e.g. virus or bacteria) in an effective way (i.e. they can eliminate/destroy the specific cause of disease without many side effects).
Magic bullets are great, if we can find them. The problem is that most medical interventions are not magic bullets. There are three reasons for this. First, magic bullets are the “low-hanging fruit” of medical science: we have probably discovered most of them by now and so we are unlikely to find new ones. Second, many of the illnesses that we want to treat have complex, and poorly understood, underlying causal mechanisms. Psychiatric illnesses are a classic example. Psychiatric illnesses are really just clusters of symptoms. There is very little agreement on their underlying causal mechanisms (though there are lots of theories). It is consequently difficult to create a medical intervention that specifically and effectively targets a psychiatric disease. This is equally true for other cases where the underlying mechanism is complex or unclear. Third, even if the disease were relatively simple in nature, human physiology is not, and the tools that we have at our disposal for intervening into human physiology are often crude and non-specific. As a result, any putative intervention might mess up the delicate chemical balancing act inside the body, with deleterious side effects. Chemotherapy is a clear example. It helps to kill cancerous cells but in the process it also kills healthy cells. This often results in very poor health outcomes for patients.
Stegenga dedicates an entire chapter of his book to this argument (chapter 4) and gives some detailed illustrations of the kinds of interventions that are at our disposal and how non-specific they often are. Hopefully, my summary suffices for getting the gist of the argument. The idea is that we should assign a low prior probability to the success of any particular treatment because it is very unlikely that the treatment is a magic bullet.
3. Defending the Second Premise: The P(E|H) is Low
The second premise claims that the evidence we tend to observe concerning medical interventions is not very likely given the hypothesis that they are successful. For me, this might be the weakest link in the argument. That may be because I have trouble understanding exactly what Stegenga is getting at, but I’ll try to explain how I think about it and you can judge for yourself whether it undermines the argument.
My big issue is that this premise, more so than the other premises, seems like one that can really only be determined on a case-by-case basis. Whether a given bit of evidence is likely given a certain hypothesis depends on what the evidence is (and what the hypothesis is). Consider the following three facts: the fact that you are wet when you come inside the house; the fact that you were carrying an umbrella with you when you did; and the fact that you complained about the rain when you spoke to me. These three facts are all pretty likely given the hypothesis that it is raining outside (i.e. the P(E|H) is high). The facts are, of course, consistent with other hypotheses (e.g. that you are a liar/prankster and that you dumped a bucket of water over your head before you came in the door) but that possibility, in and of itself, doesn’t mean the likelihood of observing the evidence that was observed, given the hypothesis that it is raining outside, is low. It seems like the magnitude of the likelihood depends specifically on the evidence observed and how consistent it is with the hypothesis. In our case, we are assuming that the hypothesis is the generic statement that the medical intervention is effective, so before we can say anything about the P(E|H) we would really need to know what the evidence in question is. In other words, it seems to me like we would have to “wait and see” what the evidence is before concluding that the likelihood is low. Otherwise we might be conflating the prior probability of an effective treatment (which I agree is low) with the likelihood.
Stegenga’s argument seems to be that we can say something generic about the likelihood given what we know about the evidential basis for existing interventions. He makes two arguments in particular about this. First, he argues that in many cases the best available medical evidence suggests that many interventions are little better than placebo when it comes to ameliorating disease. In other words, patients who take an intervention usually do little better than those who take a placebo. This is an acknowledged problem in medicine, sometimes referred to as medicine’s “darkest secret”. He gives detailed examples of this on pages 171 to 175 of the book. For instance, the best available evidence concerning the effectiveness of anti-depressants and cholesterol-lowering drugs (statins) suggests they have minimal positive effects. That is not the kind of evidence we would expect to see on the hypothesis that the treatments are effective.
The second argument he makes is about discordant evidence. He points out that in many cases the evidence for the effectiveness of existing treatments is a mixed bag: some high quality studies suggest positive (if minimal) effects; others suggest there is no effect; and others suggest that there is a negative effect. Again, this is not the kind of evidence we would expect to see if the intervention is effective. If the intervention were truly effective, surely there would be a pronounced positive bias in the total set of evidence? Stegenga goes into some of the technical reasons why this argument from discordant evidence is correct, but we don’t need to do that here. This description of the problem should suffice.
I agree with both of Stegenga’s arguments, but I still have qualms about his general claim that the P(E|H) for any particular medical intervention is low. Why is this? Let’s see if I can set it out more clearly. I believe that Stegenga succeeds in showing that the evidence we do observe concerning specific existing treatments is not particularly likely given the hypothesis that those treatments are effective. That’s pretty irrefutable given the examples discussed in his book. But as I understand it, the argument for medical nihilism is a general one that is supposed to apply to any random or novel medical treatment, not a specific one concerning particular medical treatments. Consequently, I don’t see why the fact that the evidence we observe concerning specific treatments is unlikely generalises to an equivalent assumption about any random or novel treatment.
That said, my grasp of probability theory leaves a lot to be desired so I may have this completely wrong. Furthermore, even if I am right, I don’t think it undermines the argument for medical nihilism all that much. The claims that Stegenga defends about the evidential basis of existing treatments can be folded into how we calculate the prior probability of any random or novel medical treatment being successful. And it would certainly lower that prior probability.
4. Defending the Third Premise: The P(E) is High
This is undoubtedly the most interesting premise of Stegenga’s argument and the one he dedicates the most attention to in his book (essentially all of chapters 5-10). I’m not going to be able to do justice to his defence of it here. All I can provide is a very brief overview. Still, I will try my best to capture the logic of the argument he makes.
To start, it helps if we clarify what this premise is stating. It is stating that we should expect to see evidence suggesting that an intervention is effective even if the intervention is not effective. In other words, it is stating that the institutional framework through which medical evidence is produced and communicated is such that there is a significant bias in favour of positive evidence, irrespective of the actual effectiveness of a treatment. To defend this claim Stegenga needs to show that there is something rotten at the heart of medical research.
The plausibility of that claim will be obvious to anyone who has been following the debates about the reproducibility crisis in medical science in the past decade, and to anyone who has been researching the many reports of fraud and bias in medical research. Still, it is worth setting out the methodological problems in general terms, and Stegenga’s presentation of them is one of the better ones.
Stegenga makes two points. The first is that the methods of medical science are highly malleable; the second is that the incentive structure of medical science is such that people are inclined to take advantage of this malleability in a way that produces evidence of positive treatment effects. These two points combine into an argument in favour of premise (3).
Let’s consider the first of these points in more detail. You might think that the methods of medical science are objective and scientific. Maybe you have read something about evidence based medicine. If so, you might well ask: Haven’t medical scientists established clear protocols for conducting medical trials? And haven’t they agreed upon a hierarchy of evidence when it comes to confirming whether a treatment is effective or not? Yes, they have. There is widespread agreement that randomised control trials are the gold standard for testing the effectiveness of a treatment, and there are detailed protocols in place for conducting those trials. Similarly, there is widespread agreement that you should not over-rely on one trial or study when making the case for a treatment. After all, one trial could be an anomaly or statistical outlier. Meta-analyses and systematic reviews are desirable because they aggregate together many different trials and see what the general trends in evidence are.
But Stegenga argues that this widespread agreement about evidential standards masks considerable problems with malleability. For example, when researchers conduct a meta-analysis, they have to make a number of subjective judgments about which studies to include, what weighting to give to them and how to interpret and aggregate their results. This means that different groups of researchers, conducting meta-analyses of the exact same body of evidence, can reach different conclusions about the effectiveness of a treatment. Stegenga gives examples of this in chapter 6 of the book. The same is true when it comes to conducting randomised control trials (chapter 7) and measuring the effectiveness of those trials (chapter 8). There are sophisticated tools for assessing the quality of evidence and the measures of effectiveness, but they are still prone to subjective judgment and assessment, and different researchers can apply them in different ways (more technically, Stegenga argues that the tools have poor ‘inter-rater reliability’ and poor ‘inter-tool reliability’). Again, he gives several examples of how these problems manifest in the book.
The malleability of the evidential tools might not be such a problem if everybody used those tools in good faith. This is where Stegenga’s second claim — about the problem of incentives — rears its ugly head. The incentives in medical science are such that not everyone is inclined to use the tools in good faith. Pharmaceutical companies need treatments to be effective if they are to survive and make profits. Scientists also depend on finding positive effects to secure career success (even if they are not being paid by pharmaceutical companies). This doesn’t mean that people are always explicitly engaging in fraud (though some definitely are); it just means that everyone operating within the institutions of medical research has a significant interest in finding and reporting positive effects. If a study doesn’t find a positive effect, it tends to go unreported. Similarly, and because of the same incentive structures, there is a significant bias against finding and reporting on the harmful effects of interventions.
Stegenga gives detailed examples of these incentive problems in the book. Some people might push back against his argument by pointing out that the problems to which he appeals are well-documented (particularly since the reproducibility crisis became common knowledge in the past decade or so) and steps have been taken to improve the institutional structure through which medical evidence is produced. So, for example, there is a common call now for trials to be pre-registered with regulators and there is greater incentive to try to replicate findings and report on negative results. But Stegenga argues that these solutions are still problematic. For example, the registration of trials and trial data, by itself, doesn’t seem to stop the over-reporting of positive results nor the approval of drugs with negative side effects. One illustration of this is the drug rosiglitazone, which is a drug for type-2 diabetes (Stegenga 2018, p 148). Due to a lawsuit, the drug manufacturer (GlaxoSmithKline) was required to register all data collected from forty-two trials of the drug. Only seven trials were published, which unsurprisingly suggested that the drug had positive effects. The drug was approved by the FDA in 1999. Later, in 2007, a researcher called Steven Nissen accessed the data from all 42 trials, conducted a meta-analysis, and discovered that the drug increased the risk of heart attack by 43%. In more concrete terms, this meant that the drug was estimated to have caused somewhere in the region of 83,000 heart attacks since coming on the market. All of this information was available to both the drug manufacturer and, crucially, the regulator (the FDA) before Nissen conducted his study. Indeed, internal memos from the company suggested that they were aware of the heart attack risk years before. Yet they had no incentive to report it and the FDA, either through incompetence or lack of resources, had no incentive to check up on them. That’s just one case. In other cases, the problem goes even deeper than this, and Stegenga gives some examples of how regulators are often complicit in maintaining the secrecy of trial data.
To reiterate, this doesn’t do justice to the nuance and detail that Stegenga provides in the book, but it does, I think, hint that there is a strong argument to be made in favour of premise (3).
5. Criticisms and Replies
What about objections to the argument? Stegenga looks at six in chapter 11 of the book (these are in addition to specific criticisms of the individual premises). I’ll review them quickly here.
The first objection is that there is no way to make a general philosophical case for medical nihilism. Whether any given medical treatment is effective depends on the empirical facts. You have to go out and test the intervention before you can reach any definitive conclusions.
Stegenga’s response to this is that he doesn’t deny the importance of the empirical facts, but he argues, as noted in the introduction to this article, that the hypothesis that any given medical intervention is effective is not purely empirical. It depends on metaphysical assumptions about the nature of disease and treatment, as well as epistemological/methodological assumptions about the nature of medical evidence. All of these have been critiqued as part of the argument for medical nihilism.
The second objection is that modern “medicine is awesome” and the argument for medical nihilism doesn’t properly acknowledge its awesomeness. The basis for this objection presumably lies in the fact that some treatments appear to be very effective and that health outcomes, for the majority of people, have improved over the past couple of centuries, during which period we have seen the rise of scientific medicine.
Stegenga’s response is that he doesn’t deny that some medical interventions are awesome. Some are, after all, magic bullets. Still, there are three problems with this “medicine is awesome” objection. First, while some interventions are awesome, they are few and far between. For any randomly chosen or novel intervention the odds are that it is not awesome. Second, Stegenga argues that people underestimate the role of non-medical interventions in improving general health and well-being. In particular, he suggests (citing some studies in support of this) that changes in hygiene and nutrition have played a big role in improved health and well-being. Finally, Stegenga argues that people underestimate the role that medicine plays in negative health outcomes. For example, according to one widely-cited estimate, there are over 400,000 preventable hospital-induced deaths in the US alone every year. This is not “awesome”.
The third objection is that regulators help to guarantee the effectiveness of treatments. They are gatekeepers that prevent harmful drugs from getting to the market. They put in place elaborate testing phases that drugs have to pass through before they are approved.
This objection holds little weight in light of the preceding discussion. There is ample evidence to suggest that regulatory approval does not guarantee the effectiveness of an intervention. Many drugs are withdrawn years after approval when evidence of harmfulness is uncovered. Many approved drugs aren’t particularly effective. Furthermore, regulators can be incompetent, under-resourced and occasionally complicit in hiding the truth about medical interventions.
The fourth objection is that peer review helps to guarantee the quality of medical evidence. This objection is, of course, laughable to anyone familiar with the system of peer review. There are many well-intentioned researchers peer-reviewing one another’s work, but they are all flawed human beings, subject to a number of biases and incompetencies. There is ample evidence to suggest that bad or poor quality evidence gets through the peer review process. Furthermore, even if they were perfect, peer reviewers can only judge the quality of the studies that are put before them. If those studies are a biased sample of the total evidence, peer reviewers cannot prevent a skewed picture of reality from emerging.
The fifth objection is that the case for medical nihilism is “anti-science”. That’s a bad thing because there is lots of anti-science activism in the medical sphere. Quacks and pressure groups push for complementary therapies and argue (often with great success) against effective mainstream interventions (like vaccines). You don’t want to give these groups fodder for their anti-science activism, but that’s exactly what the case for medical nihilism does.
But the case for medical nihilism is definitely not anti-science. It is about promoting good science over bad science. This is something that Stegenga repeatedly emphasises in the book. He looks at the best quality scientific evidence to make his case for the ineffectiveness of interventions. He doesn’t reject or deny the scientific method. He just argues that the best protocols are not always followed, that they are not perfect, and that when they are followed the resulting evidence does not make a strong case for effectiveness. In many ways, the book could be read as a plea for a more scientific form of medical research, not a less scientific form. Furthermore, unlike the purveyors of anti-science, Stegenga is not advocating some anti-science alternative to medical science — though he does suggest we should be less interventionist in our approach to illness, given the fact that many interventions are ineffective.
The sixth and final objection is that there are, and will be soon, some “game-changing” medical breakthroughs (e.g. stem cell treatment or genetic engineering). These breakthroughs will enable numerous, highly effective interventions. The medical nihilist argument doesn’t seem to acknowledge either the reality or possibility of such game-changers.
The response to this is simple. Sure, there could be some game-changers, but we should be sceptical about any claim to the effect that a particular treatment is a game-changer. There are significant incentives at play that encourage people to overhype new discoveries. Few of the alleged breakthroughs in the past couple of decades have been game-changers. We also know that most new interventions fail or have small effect sizes when scrutinised in depth. Consequently, a priori scepticism is warranted.
6. Conclusion
That brings us to the end of the argument. To briefly summarise, medical nihilism is the view that we should be sceptical about the effectiveness of medical interventions. There are three reasons for this, each corresponding to one of the key probabilities in Bayes’ Theorem. The first reason is that the prior probability of a treatment being effective is low. This is something we can infer from the long history of failed medical interventions, and the fact that there are relatively few medical magic bullets. The second reason is that the probability of the evidence for effectiveness, given the hypothesis that an intervention is effective, is low. We know this because the best available evidence concerning medical interventions suggests they have very small effect sizes, and there is often a lot of discordant evidence. Finally, the third reason is that the prior probability of observing evidence suggesting that a treatment is effective, irrespective of its actual effectiveness, is high. This is because medical evidence is highly malleable, and there are strong incentives at play that encourage people to present positive evidence and hide/ignore negative evidence.
* For Bayes aficionados: yes, I know that this is the short form of the equation, and I know I have reversed the order of two terms in the equation from the standard presentation. |
2abb413204261ee1 | # Application of the Schrödinger Theory
# Particle Inside an Infinite Potential Well
A 1D well of length $a$ with infinitely high walls,
$$E_{p}(x)=\begin{cases}\infty & \text{for } x<0 \text{ or } x>a \\ 0 & \text{for } 0 \leq x \leq a\end{cases}$$
Since $E_p$ outside the well ($E_{po}$) is $\infty$, one must provide an infinite amount of energy to pull the particle out of the well. This means that the particle cannot get out.
Our task
To find all the possible wavefunctions that can be associated with the particle inside such a well, and to find the position and momentum expectation values.
Different energies $E$ correspond to different wavefunctions, and different wavefunctions correspond to different probability densities $|\psi|^2$. At different energies, the probability density for detecting the particle is different. When we speak of the particle's probability distribution there is a premise: it is the distribution at a given energy. To better determine the particle's position, it is best to know the energy it occupies.
$$\psi(x, t) = \chi(x) \Gamma(t) = \chi(x) e^{-\frac{iE}{\hbar}t}$$
# Step 1 Outside of the well
Before finding all the possible wavefunctions that can be associated with the particle inside such a well, let us answer the question: what is the eigenfunction $\chi$ outside? The answer is $\chi = 0$. For the particle to get out, an infinite energy must be provided. This is physically impossible, and therefore $|\chi|^2 = 0$ outside the well,
$$\chi = 0 \; \text{for} \; x \leq 0 \; \text{or} \; x \geq a$$
# Step 2 General solutions
To evaluate $\chi$ inside, we must solve $-\frac{\hbar^{2}}{2 m} \frac{d^{2} \chi}{d x^{2}}+E_{p}(x) \chi=E \chi$ for the case $E_p = 0$.
Divide both sides by $-\frac{\hbar^2}{2m}$ and put both terms on the left side to get
$$\frac{d^2\chi}{dx^2} + \frac{2mE}{\hbar^2} \chi = \frac{d^2\chi}{dx^2} + k^2 \chi = 0$$
where $k^2 = \frac{2mE}{\hbar^2}$. The equation is a second-order differential equation with constant coefficients. Try a solution of the form
$$\chi = e^{\alpha x}$$
where $\alpha$ is a constant as yet undetermined. We obtain
$$\frac{d\chi}{dx} = \alpha e^{\alpha x}$$

$$\frac{d^2\chi}{dx^2} = \alpha^2 e^{\alpha x}$$

$$\alpha^2 e^{\alpha x} + k^2 e^{\alpha x} = 0$$

$$\alpha^2 + k^2 = 0 \Rightarrow \alpha = \pm ik$$
The two solutions are therefore $e^{ikx}$ and $e^{-ikx}$. The general solution will be the sum of these two:
$$\chi(x) = a e^{ikx} + b e^{-ikx} = (a + b) \cos(kx) + i(a-b) \sin(kx)$$
where $a$ and $b$ are arbitrary constants. Because $a$ and $b$ are arbitrary constants, we simplify the notation by introducing new constants $A$ and $B$,
$$\chi(x) = A \cos(kx) + B\sin(kx)$$
$A$ and $B$ are now the arbitrary constants to be determined from physical considerations.
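As a quick cross-check (a sketch, not part of the original notes), sympy's ODE solver reproduces this general solution directly:

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', positive=True)
chi = sp.Function('chi')

# Solve chi'' + k^2 * chi = 0
sol = sp.dsolve(chi(x).diff(x, 2) + k**2 * chi(x), chi(x))
print(sol)  # Eq(chi(x), C1*sin(k*x) + C2*cos(k*x))
```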
# Step 3 Well-behaved
Thus far, no restrictions have been found as to the values that $k$, and therefore $E$, can take. However, as soon as we require that $\chi$ be well-behaved, the restrictions on $k$ and $E$ will appear. For a well-behaved $\chi$, three requirements must be satisfied by $\chi$ and $d\chi/dx$: 1. Finite 2. Single-valued 3. Continuous.
Conditions 1 and 2 are satisfied by $\chi$. The condition that $\chi$ be continuous requires that $\chi$ be 0 at $x=0$ and at $x=a$, because $\chi$ is zero outside the well.
At the left side of the well,
$$\chi(0) = A\times 1 + B \times 0 = A = 0$$
$$\chi = B\sin(kx)$$
At the right side of the well,
$$\chi(a) = B\sin(ka) = 0$$
There are two ways to satisfy this: either $B=0$ or $\sin(ka) = 0$. If $B=0$, then $\chi = 0$ everywhere. This means that the particle is not in the well. The only meaningful way to satisfy the condition of continuity is when $\sin ka = 0$, that is, $ka = 0, \pi, 2\pi, \cdots$ or
$$k = n\frac{\pi}{a}, \quad n = 1, 2, 3, \cdots$$
Notice that $n=0$ is not an acceptable choice. If $n=0$, then $k=0$ and $\chi$ would be zero everywhere; that is, the particle could not be in the well. The set of eigenfunctions is
$$\chi_n = B \sin\left(\frac{n\pi}{a} x\right)$$
By substituting $k$ into $k^{2}=\frac{2 m E}{\hbar^{2}}$, we get a set of eigenvalues:
$$k^2 = \frac{2mE}{\hbar^2} \Rightarrow \frac{n^2 \pi^2}{a^2} = \frac{2mE}{\hbar^2}$$
$$E_n = n^2 \frac{\pi^2 \hbar^2}{2ma^2} = n^2 E_0, \quad \text{where} \; E_0 = \frac{\pi^2 \hbar^2}{2ma^2}$$
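To get a feel for the scale, here is a short numerical sketch; the well width $a = 1\,\text{nm}$ is an assumed value chosen for illustration:

```python
import numpy as np

hbar = 1.054571817e-34  # J*s
m_e = 9.1093837015e-31  # kg (electron)
eV = 1.602176634e-19    # J
a = 1e-9                # m, assumed well width (1 nm)

E0 = np.pi**2 * hbar**2 / (2 * m_e * a**2)
for n in range(1, 4):
    print(f"E_{n} = {n**2 * E0 / eV:.3f} eV")
# E_1 = 0.376 eV, E_2 = 1.504 eV, E_3 = 3.385 eV
```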
We should note that the first derivative of the eigenfunctions in this case is not continuous at $x=0$ and at $x=a$.
Left side of the well:
$$0 \neq \frac{n\pi B}{a}\cos\left(\frac{n\pi}{a} x\right) \Big|_{x=0} = \frac{n\pi B}{a}$$
Right side of the well:
$$0 \neq \frac{n\pi B}{a}\cos\left(\frac{n\pi}{a} x\right) \Big|_{x=a} = \frac{n\pi B}{a} \cos (n\pi)$$
As a result, $d^2\chi/dx^2$ would be infinite at $0$ and $a$. From the time-independent Schrödinger equation, this would imply that either $E$ or $E_p$ is infinite. In this idealized example $E_p = \infty$.
The solution of the Schrödinger equation has given us a set of eigenfunctions $\chi$ that can be used to describe a particle in the potential well. It does not tell us which particular $\chi$ is associated with the particle. Which particular $\chi$ one assigns to it depends on how the particle was placed in the well and what is done to the particle afterward. If we leave the particle alone, it will, following the tendency of all physical systems, tend to go to the lowest energy state available, which is called the ground state, $E = E_0$.
In the present case of lowest energy, the eigenfunction representing the x-position of the particle will be
$$\chi(x) = B \sin\left(\frac{\pi}{a} x\right)$$
When $\chi(x)$ is multiplied by the time part of the wavefunction,
$$\Gamma(t)=e^{-\frac{i E_0}{\hbar} t}$$
the resulting wavefunction
$$\psi_1(x,t) = B\sin\left(\frac{\pi}{a} x\right) e^{-\frac{i E_0}{\hbar} t}$$
As an aside on complex moduli: $A = a + bi \Rightarrow |A| = \sqrt{a^2 + b^2} \Rightarrow |A|^2 = a^2 + b^2$, and $A^* = a - bi \Rightarrow A^*A = (a-bi)(a+bi) = a^2 + b^2$. We obtain
$$|A|^2 = A^* A$$
$$|\psi_1|^2 = \psi_1^* \psi_1 = B^2 \sin^2\left(\frac{\pi}{a}x\right) \left(e^{\frac{i E_0}{\hbar} t}\right) \left(e^{-\frac{i E_0}{\hbar} t}\right) = B^2 \sin^2\left(\frac{\pi}{a}x\right)$$
# Step 4 Normalization
The probability of finding the particle somewhere in space must be 1. In mathematical terms this fact is stated as follows:
$$\int_{-\infty}^{\infty} |\psi|^2 \, dx = \int_{0}^{a} |\psi|^2 \, dx = \int_{0}^{a} B^2 \sin^2\left(\frac{\pi}{a}x\right) dx = 1$$
From equation 64 in the integration table,
$$B^2\left[\frac{x}{2} - \frac{\sin\left(\frac{2\pi}{a}x\right)}{4\frac{\pi}{a}} \right]_0^a = 1$$
$$\Rightarrow B^2 \left(\frac{a}{2} - 0\right) = 1$$
$$B = \sqrt{\frac{2}{a}}$$
Therefore, when the particle goes to the ground state, the eigenvalue is
$$E = E_{0}=\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}$$
the eigenfunction is
$$\chi_1(x) = \sqrt{\frac{2}{a}} \sin\left( \frac{\pi}{a}x\right)$$
the wavefunction is
$$\psi_1(x,t) = \chi_1(x) \Gamma_1(t) = \sqrt{\frac{2}{a}} \sin\left( \frac{\pi}{a}x\right) e^{-\frac{i E_0}{\hbar} t}$$
$$|\psi_1(x,t)|^2 = |\psi_1|^2 = \frac{2}{a} \sin^2\left(\frac{\pi}{a}x\right)$$
# Step 5 Expectations
Consider a particle in the ground state. The average position is
$$\bar{x} = \int_{-\infty}^{\infty} \psi_1^{*}\, x\, \psi_1 \, dx = \frac{2}{a} \int_0^a x \sin^2 \left(\frac{\pi}{a}x\right) dx = \frac{a}{2}$$
$$\bar{p} = \int_{-\infty}^{\infty} \psi^* \left( -i \hbar \frac{\partial}{\partial x}\right) \psi \, dx = 0$$
$$\bar{E} = \int_{-\infty}^{\infty} \psi^{*} E \psi \, dx = \int_{-\infty}^{\infty} \psi^{*} \left( i \hbar \frac{\partial \psi}{\partial t} \right) dx = E_0$$
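These results can be verified numerically; a sketch (setting $a = 1$ for convenience):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0
psi1_sq = lambda x: (2 / a) * np.sin(np.pi * x / a) ** 2  # |psi_1|^2

norm, _ = quad(psi1_sq, 0, a)                    # should be 1 (normalized)
x_mean, _ = quad(lambda x: x * psi1_sq(x), 0, a)
print(norm, x_mean)                              # 1.0, 0.5 = a/2
```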
# Harmonic Oscillator
# Classical
Suppose a body of mass $m$ is connected to a massless spring with a spring constant $k$, and the body is free to oscillate on a frictionless surface. At its rest, or equilibrium, position, the position coordinate is $x=0$. If the body is pushed to compress the spring a distance $x_0$, or pulled to stretch it a distance $x_0$, and then released, the body will begin to oscillate. We may calculate its subsequent motion from Newton's second law. If you pull on a spring with force $F$, it pulls in the opposite direction with force $-F$. Thus, the force that the spring exerts on the body is $-kx$. The acceleration is not constant, so we use the fundamental definition $a_x=d^2x/dt^2$. Applying Newton's second law to the body, we obtain
$$-kx = m \frac{d^2x}{dt^2} \qquad \text{(10.10)}$$
We may guess a solution.
$$x = A \sin(\omega t + \phi) \qquad \text{(10.9)}$$
Then, we obtain
$$\frac{dx}{dt} = A\omega\cos(\omega t + \phi)$$

$$\frac{d^2x}{dt^2} = -A\omega^2\sin(\omega t + \phi)$$
Substitute these into the main equation:
$$- k A \sin (\omega t + \phi) = - m A \omega^2 \sin(\omega t + \phi)$$

$$\Rightarrow k = m \omega^2$$

$$\Rightarrow \omega = \sqrt{\frac{k}{m}} \qquad \text{(10.12)}$$
Therefore, Eq. 10.9 is a solution when the constants have the relation of Eq. 10.12. Using $\omega = 2\pi\nu$, we obtain the frequency of oscillation $\nu = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$
and the period
$$T = \frac{1}{\nu} = 2\pi \sqrt{\frac{m}{k}}$$
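The guessed solution can also be checked by integrating Eq. 10.10 numerically. A sketch, with assumed values $m = 1\,\text{kg}$, $k = 4\,\text{N/m}$, $x_0 = 0.1\,\text{m}$ (released from rest, so $x(t) = x_0\cos(\omega t)$):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, x0 = 1.0, 4.0, 0.1        # assumed values; omega = 2 rad/s
omega = np.sqrt(k / m)

# Newton's second law as a first-order system: y = [x, dx/dt]
rhs = lambda t, y: [y[1], -k / m * y[0]]

t = np.linspace(0, 2 * np.pi / omega, 200)   # one full period
sol = solve_ivp(rhs, (t[0], t[-1]), [x0, 0.0],
                t_eval=t, rtol=1e-10, atol=1e-12)

err = np.max(np.abs(sol.y[0] - x0 * np.cos(omega * t)))
print(err)  # tiny: the numerical and analytic solutions agree
```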
# Quantum world
$$E_p = \frac{1}{2} kx^2$$
$$E_n = \left( n + \frac{1}{2} \right) \hbar \sqrt{\frac{k}{m}}$$
Since $\omega = \sqrt{\frac{k}{m}}$, we obtain
$$E_n = \left( n + \frac{1}{2} \right) \hbar \omega = \left( n + \frac{1}{2} \right) h \nu$$
1. $\omega = \sqrt{\frac{k}{m}}$ was derived from Newtonian mechanics; why do we substitute it here?
2. The quantum harmonic oscillator does not oscillate in the way we picture; what, then, do $\omega$ and $\nu$ actually mean here?
# Some important aspects
1. The difference between adjacent energy levels is a constant $h\nu$. This is consistent with Planck's blackbody theory.
2. $E_0 = \frac{1}{2} h \nu$, which is not equal to 0. This is different from Planck's blackbody theory, i.e., $E = nh\nu$ with $E_0 = 0$.
3. The eigenfunction of the ground state is
$$\chi_0 = C e^{-\frac{\sqrt{mk}\,x^2}{2\hbar}}$$
(In fact this equation is correct: since $m\omega = m\sqrt{k/m} = \sqrt{mk}$, it is the standard Gaussian ground state $\chi_0 = C e^{-\frac{m\omega x^2}{2\hbar}}$.)
where $C$ can be decided through
$$\int_{-\infty}^\infty |\chi|^2 dx = 1 \Rightarrow C = \left(\frac{\sqrt{mk}}{\pi\hbar}\right)^{1/4}$$
1. In classical mechanics, if $E = \frac{1}{2} h\nu = \frac{1}{2}kx_\text{max}^2$, then
$$x_\text{max} = \sqrt{\frac{2E}{k}} = \sqrt{\frac{h\nu}{k}}$$
In classical mechanics, the particle cannot exceed $x_\text{max}$. But in quantum mechanics, the particle may be found beyond $x_\text{max}$ with low probability.
Draw an image using Python.
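A sketch of such an image, working in the dimensionless coordinate $\xi = x\sqrt{m\omega/\hbar}$ (with $\omega = \sqrt{k/m}$), in which the classical turning point sits at $\xi = 1$; it also evaluates the probability of finding the ground-state particle in the classically forbidden region:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc

# Ground-state probability density in dimensionless units
# xi = x * sqrt(m*omega/hbar); classical turning points at xi = +/- 1.
xi = np.linspace(-3, 3, 400)
prob = np.exp(-xi**2) / np.sqrt(np.pi)  # |chi_0|^2

plt.plot(xi, prob)
plt.axvline(-1, ls='--')
plt.axvline(1, ls='--')                 # +/- x_max
plt.xlabel('xi')
plt.ylabel('|chi_0|^2')
plt.show()

# Probability of being beyond the classical turning points:
print(erfc(1.0))  # ~0.157, i.e. about 16% in the forbidden region
```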
Quantum Tunneling
Quantum tunneling is used in the Scanning Tunneling Microscope (STM). You should try to understand how it works.
$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}\,\pi^{1/4}} H_n(x) e^{-x^2/2}$$
# Schrödinger equation for H atom
Process: 1. Change the equation from $xyz$ to spherical coordinates. 2. Separation of variables. 3. Solve three differential equations with the three requirements.
In this process, we obtain three numbers (we call them quantum numbers): $n$, $l$, $m_l$.
Because the energy
$$E_n = - \frac{Z^2e^4m}{8\epsilon_0^2h^2} \frac{1}{n^2}$$
depends only on the quantum number $n$, it is called the principal quantum number.
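Evaluating this formula for hydrogen ($Z = 1$) reproduces the familiar $-13.6\,\text{eV}/n^2$ ladder; a minimal sketch with standard SI constants:

```python
e = 1.602176634e-19      # C
m = 9.1093837015e-31     # kg (electron)
eps0 = 8.8541878128e-12  # F/m
h = 6.62607015e-34       # J*s
Z = 1

for n in range(1, 4):
    E_n = -Z**2 * e**4 * m / (8 * eps0**2 * h**2 * n**2)
    print(f"n={n}: {E_n / e:.3f} eV")  # -13.606, -3.401, -1.512
```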
The restrictions on these quantum numbers are
$$n = 1, 2, 3, \ldots$$

$$l = 0, 1, 2, \ldots, n-1$$

$$m_l = 0, \pm 1, \pm 2, \ldots, \pm l$$
where $n > l \geq |m_l|$, and
• $l$ is called the orbital quantum number, because $l$ determines the magnitude of the angular momentum $\bold{L}$ of the atom.
$$L = \sqrt{l(l+1)}\, \hbar$$
• $m_l$ is called the magnetic quantum number, because $m_l$ determines the orientation of the angular momentum $\bold{L}$ in a magnetic field. If an atom is placed in a magnetic field directed along the $z$ direction, the $z$ component of the angular momentum $\bold{L}$ of the atom is given by
$$L_z = m_l \hbar \qquad (21.10)$$
The vector $\bold{L}$ is perpendicular to the plane of rotation. The result of Eq. 21.10 tells us that in an atom, $\bold{L}$ cannot have any arbitrary orientation with respect to the $z$ axis, but rather it can have only certain discrete orientations. This is known as space quantization.
Possible orientations of the angular momentum $L$ of the electron in the hydrogen atom for the case where the orbital quantum number $l$=2
Suppose $l=2$; then $m_l$ can be 2, 1, 0, -1, -2. Thus $L_z = 2\hbar, \hbar, 0, -\hbar, -2\hbar$. By the way, the magnitude of $\bold{L}$ can be calculated as $L = \sqrt{2(2+1)}\, \hbar = \sqrt{6}\, \hbar$. The spatial variation of $\chi$ depends on the three quantum numbers, and the wavefunction is written with the quantum numbers as subscripts, $\chi_{nlm_l}$. Because for a given $n$ the other two numbers can take several values, it is possible for the electron to have quite different characteristics while maintaining the same energy. States $\chi$ having the same energy but different values of the quantum numbers $l$ and $m_l$ are called degenerate states.
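The worked numbers for $l=2$ can be generated directly (a small sketch, in units of $\hbar$):

```python
import numpy as np

l = 2
print("L_z / hbar:", list(range(-l, l + 1)))  # -2, -1, 0, 1, 2
print("|L| / hbar:", np.sqrt(l * (l + 1)))    # sqrt(6) ~ 2.449
```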
Partial representation of the degeneracy of the eigenfunctions in the hydrogen atom.
States for which
• $l=0$ are called $s$ states
• $l=1$ are called $p$ states
• $l=2$ are called $d$ states
• $l=3$ are called $f$ states.
The Stern-Gerlach experiment is evidence that there is a magnetic dipole other than the orbital dipole that has been overlooked. What had the original Schrödinger theory overlooked?
The electron has an intrinsic angular momentum called the spin $\bold{S}$. Just as the orbital angular momentum $\bold{L}$ has a magnetic dipole associated with it, so does the spin. By analogy with the behavior of $\bold{L}$, and in order to explain the experimental results, G. Uhlenbeck and S. Goudsmit postulated that the magnitude of $\bold{S}$ and its $z$ component are quantized as follows:
$$S = \sqrt{s(s+1)}\, \hbar \quad \text{where} \; s = \frac{1}{2}$$
$$S_z = m_s \hbar \quad \text{where} \; m_s = \pm \frac{1}{2}$$
Using the spin postulate, many experimental results can be explained. For us the main conclusion is that the state of an electron is now specified by four quantum numbers: $n$, $l$, $m_l$, $m_s$. Note that the quantum number $s$ is $\frac{1}{2}$ for all individual electrons, so we do not need to specify it further.
$m_s = \frac{1}{2}$ is spin up and $m_s = -\frac{1}{2}$ is spin down.
# Some features of the atomic wavefunctions
Why don't we talk about $\Gamma(t)$ anymore?
We have $\psi = \chi \Gamma$; why do we leave $\Gamma$ behind?
Source: Visualizing All Things [1]
1. For $s$ states ($l=0$), $|\chi|^2$ has spherical symmetry.
2. Other states have axial symmetry, but no spherical symmetry.
3. $\sum_{m_l}\chi^*_{n,l,m_l}\chi_{n,l,m_l}$ has spherical symmetry.
4. Look at the radial probability density $P(r)$ ($P(r)\,dr$ is the probability of finding the electron between $r$ and $r+dr$). For a given $n$, an electron with lower $l$ is more likely to be found near the nucleus (lower $l$ $\rightarrow$ lower angular momentum $L$).
[1] Youtube: Hydrogen Electron Orbital |
b0de84f9be971b7e | Persistent stability of a chaotic system
Huber, Greg; Pradas, Marc; Pumir, Alain and Wilkinson, Michael (2018). Persistent stability of a chaotic system. Physica A: Statistical Mechanics and its Applications, 492, pp. 517–523.
We report that trajectories of a one-dimensional model for inertial particles in a random velocity field can remain stable for a surprisingly long time, despite the fact that the system is chaotic. We provide a detailed quantitative description of this effect by developing the large-deviation theory for fluctuations of the finite-time Lyapunov exponent of this system. Specifically, the determination of the entropy function for the distribution reduces to the analysis of a Schrödinger equation, which is tackled by semi-classical methods. The system has generic instability properties, and we consider the broader implications of our observation of long-term stability in chaotic systems.
|
bb0958fbfa609dd0 | arvin-ash
How Quantum Mechanics Predicts the Structure of Atoms
Fantastic! I’m a chemist and you explained the concept in a 20 min video that took me 4 years to understand!
00:00 - The question: Why atoms are structured this way
01:30 - It's all about energy
02:48 - How Schrodinger equation predicts elements
03:20 - Why are shell numbers so special?
06:08 - The key solving the wave function
08:33 - Visualizing atoms from wave function
09:44 - How shell configurations correspond to periodic table
12:02 - Orbitals and shells are not the same
13:00 - How to learn more about the periodic table
But why do certain elements have similar properties? Their properties have to do with the way electrons are arranged around the nucleus of atoms. But why are electrons arranged specifically in certain orbitals and shells? The structure of atoms can be predicted by quantum mechanics. It can explain the entire periodic table of elements.
Some configurations of electrons are more energy efficient than others. And this energy can be calculated using the Schrodinger equation. The lowest-energy configuration of electrons is when the electron shells are either empty or full.
What are atomic shells?
The Bohr model of the hydrogen atom showed that electrons can only exist in certain stable orbits around a nucleus where its angular momentum is proportional to Planck’s constant. These are like shells around the nucleus.
Erwin Schrodinger showed that rather than being confined to an orbit like a planet, an electron is a matter wave that forms a probability cloud in 3D space smeared around the nucleus.
The Schrodinger equation showed that each shell has a maximum number of electrons it can hold. The innermost shell holds a maximum of 2 electrons, the second 8 electrons, then 18, then 32, then 50, then 72, and so on.
Chemistry is about substances exchanging or sharing electrons in order to fully fill up these shells.
To understand why certain shells can only hold a specific number of electrons, we have to solve the Schrödinger equation, which is just a statement of energy conservation. It says that total energy is equal to potential energy plus kinetic energy. This equation can be solved most easily for the hydrogen atom because it is the simplest atom - just one electron orbiting one proton.
The key term to solve is psi, the wave function of the atom. It represents a value that is related to the probability of the atom being in certain quantum states. In order to solve these, you have to specify some quantum values for the atom, which are represented by n, l and m -- n is the electron shell layer, l is a quantum number defining the orbital angular momentum, and m specifies the orientation in space of the orbital.
When we plug in different values for n, l, and m in the Schrodinger equation for the hydrogen atom, it also approximately represents the solution to ALL the quantum states of electrons for any other atom. So, this equation allows us to predict how electrons behave for all the elements of the periodic table.
Different values of n, l and m can be shown to represent different structures of atoms because these quantum numbers explain the possible shells or orbitals available to the electrons.
We find that the total number of configurations equals the electron shell numbers. So, the structure of the periodic table can be understood just by solving the Schrödinger equation for the hydrogen atom.
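One way to see the counting (a sketch; the counting is the whole computation): for each shell n, sum the allowed (l, m) orbitals and double for the two spin states, which reproduces the shell capacities 2, 8, 18, 32, ... mentioned above.

```python
for n in range(1, 7):
    orbitals = sum(2 * l + 1 for l in range(n))  # l = 0..n-1, m = -l..l
    print(f"n={n}: {2 * orbitals} electrons")    # 2, 8, 18, 32, 50, 72
```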
But can the hydrogen solution really work for all atoms?
The answer is no, this hydrogen solution isn’t exactly correct. It gets less accurate for larger atoms. So if you look in detail, there are some small variations for the larger atoms. And in those cases, the orbitals can be occupied in slightly different ways.
But the problem is that the Schrodinger equation becomes so unwieldy for large atoms that it cannot be solved exactly. But since we can solve it exactly for hydrogen, it can give us a good understanding of other atoms nevertheless.
Solving for the wave function also shows, for example, that the orbitals associated with a higher value of “l” are not necessarily the shells with the next-lowest energy. This is why you learned in chemistry that instead of starting the d-orbital in the third row, you start with the next s-orbital.
The reason is that the wave function shows that the s-orbital with “n=4” and “l=0” is actually more energy efficient than the d-orbital. But after that, it becomes more efficient to fill up the d-orbitals.
So simply put, depending on the quantum numbers of the wavefunction, you fill up the orbitals in terms of what requires the least energy. And voila, you get the periodic table!
At the end of the day, chemistry is just quantum mechanics with electrons!
|
166a5fd169380ebb | In a strange episode at the end of the last Torah portion, Balak, Phinehas (Pinchas) slays a Jewish prince caught in the act with a heathen woman; in this week’s eponymous Torah portion he is rewarded with the priesthood. This raises the question: what is the connection between the act of zealotry by Phinehas and the reward of priesthood he receives for it?
By way of background, as we read in the previous Torah portion, the evil king, Balak, fails to bring a curse on the Jewish people through Balaam (Bilam). According to the Midrash, Balaam advises Balak to send the most beautiful Midianite women to seduce Jewish men (see Flavius Josephus’s Antiquities of the Jews, Book IV, Chapter VI, Paragraphs 6-12). Balak heeds the advice and uses Moabite and Midianite women to seduce Jewish men, who promptly fall into the trap. Commingling with Moabite and Midianite women, Jewish men begin to worship the idol of Baal-peor. A plague breaks out among the Jews.
Jews were chosen by G‑d to be His holy nation. “Ye shall be holy; for I, the Lord, your G‑d, am holy,” says G‑d to the Jewish nation (Leviticus 19:2). As I explained in my earlier post, Ye Shall be Disentangled, holy (kadosh in Heb.) means separated. As one of the consequences of this injunction, Jews may not get entangled with heathen nations through sexual union. Sexual immorality with Midianite women eroded the boundaries of separation and entangled the holy nation with a heathen nation. This had catastrophic consequences, killing twenty-four thousand Jews in a plague. The whole nation was thrown into an untenable state of superposition of holy and unholy.
When a Jewish prince, Zimri, brings a Midianite princess, Cozbi (Kozbi), into his tent in front of Moses and the elders of Israel and cohabits with her, Moses is uncertain of what to do. This uncertainty may very well be the result of the spiritual decline of the entire nation due to sexual immorality. Pinchas takes a spear and pierces this cloud of uncertainty (characteristic of the wavefunction before it is collapsed) by piercing the man and the woman engaged in the sexual act, thereby collapsing the wavefunction, as it were. This ended the plague and restored the sanctity of the holy nation. So much for the last Torah portion, Balak (Num. 25).
In this Torah portion, Phinehas is rewarded with priesthood – the only time in history when the priesthood (kehunah) was given to someone who had not inherited it. Why? What is it in the job description of a kohen (priest) that Phinehas had a particular talent and qualification for? Aside from ministering to G‑d and performing service in the Temple, an important function of a kohen was to declare people, houses and objects afflicted with Tzoras and negoim (spiritual maladies) pure (tahor) or impure (tameh). (See my post “Tumah and Taharah”). Before a priest rules on the status of a person or a house, that person or house is in a state of superposition of the states of spiritual purity (tahara) and spiritual impurity (tumah). The priest collapses the wavefunction, upon which the person or the house is either pure or impure (and, if impure, has to go through the process of purification administered by the priest).
By killing the adulterers, and thereby collapsing the superposition of holy and unholy, Phinehas demonstrated a unique talent for collapsing the wavefunction the “right” way, as is required of a kohen-priest, earning him and his descendants the priesthood, for which he seemed uniquely qualified.
If knowing how to collapse the wavefunction “the right way” qualifies one for the priesthood, one may be tempted to say that present day’s priests are physicists who study quantum mechanics and know a lot about the wavefunction. Alas, it is not so. No matter how skilled we may be at solving the Schrödinger equation and calculating the wavefunction, we have no idea how to effect its collapse. We don’t even know what causes the collapse of the wavefunction or why and how it occurs. We just know that before an experiment, a quantum-mechanical object is described by the wavefunction, whose squared amplitude is the probability of finding that object in a given region of space. When we perform a measurement on the object, we find it in a particular place. In other words, the cloud of uncertainty described by the wavefunction collapses into a single point or a certain value that we can measure. The outcome of this collapse is totally random and only obeys the laws of statistics. We have no way of affecting the collapse of the wavefunction and the outcome thereof, which is totally random.
What causes the collapse of the wavefunction is a subject of considerable debate. Some notable physicists, like John von Neumann, Nobel laureate Eugene Wigner, and John Archibald Wheeler, thought that it is the consciousness of an observer that causes the collapse. According to them, only a conscious observer (called by Wheeler a “participating observer”) can collapse the wavefunction. If it is done by our consciousness, it is done on a subconscious level. It stands to reason that people who have control of their subconscious may be able to affect the collapse of the wavefunction. Hence, we have the Chasidic dictum, tracht gut, vet zain gut! – Think good and it shall be good!
As I explained in the previous post, Secrets of the Talking Ass, all miracles are effectuated by collapsing the wavefunction the right way. In my humble opinion, tzadikim – the righteous holy men – are the priests of today. By giving a blessing, a tzadik (a holy man) may be able to affect the future by collapsing the wavefunction “the right way” without breaking laws of nature.
|
dd1862e19748a8c8 | Sunday, November 30, 2014
The life of a theoretician trying to be worth his salt is full of worrying: it is always necessary to make internal consistency checks. One of the worries is whether the hypothesis heff = n×h = hgr = GMm/v0 is really consistent with TGD inspired quantum biology, or has wishful thinking made its way into the arguments? More precisely, does the nominal value Bend = .2×10^-4 Tesla of the "endogenous" magnetic field suggested by the effects of ELF em fields on the brain give an electron cyclotron energy E = heff eBend/(2π m) in the few-eV range for the value of n in question?
Some background
First some background.
1. The identification heff = hgr, where hgr is what I call the gravitational Planck constant
hgr = GMm/v0 = rS m c/(2β0) , β0 = v0/c
makes the model quantitative. In the expression for hgr, M is the "large" mass - naturally Earth's mass ME. m would be the mass of a 4He atom. rS = 2GM/c^2 denotes the Schwarzschild radius of Earth, which from ME = 3×10^-6 MSun and rS(Sun) = 3 km is about 9 mm. v0 would be some characteristic velocity for the Earth-superfluid system, and the rotation velocity v0 = 465.1 m/s of Earth is a good candidate in this respect. Also the radius of Earth RE = 6.38×10^6 meters will be needed.
2. One could fix the value of v0 in the following manner. Consider the Schrödinger equation for a particle in the gravitational field of a massive object at vertical flux tubes carrying the gravitational interaction. The solutions are Airy functions which decay very fast above some critical distance z0. Require that z0 is, apart from a numerical factor, equal to the Earth's radius. This condition predicts the value of v0, which in the case of Earth and Sun is consistent with the earlier hypothesis about their values. For the Sun v0 would be 5.65×10^-4 c, and for Earth the orbital rotation velocity β0 would be scaled up from 1.6×10^-6 to 2.3×10^-6, by a factor 1.41 ≈ 2^(1/2).
3. In TGD inspired biology the hypothesis hgr = heff = n×h plays a key role. One of the basic implications is that the energies of cyclotron photons associated with magnetic flux tubes have a universal energy spectrum, since the dependence on the mass of the charged particle disappears. The same holds for the gravitational Compton length: λgr = hgr/m does not depend on the mass of the particle and equals λgr = GM/v0 ≈ 645 meters in the recent case. The scale of the superfluid system is thus much smaller than the coherence length.
4. Note that the nominal value of Bend is definitely not the only value in the spectrum of Bend. Already the model of hearing forces one to allow a spectrum of about 10 octaves (3 orders of magnitude) corresponding to the spectrum of audible frequencies. Also the geometric model of harmony correlating music and the genetic code requires this.
Does hgr=heff hypothesis predict that the energy range of dark photons is that of biophotons?
Consider now the question whether the predicted value of n is consistent with the assumption that dark cyclotron photons have energies in visible and and UV range.
1. The value of the integer n in heff = n×h equals the ratio of the gravitational and ordinary Compton lengths:
n = heff/h = λgr/λc .
For the electron one obtains n = .6×10^15. In the case of the proton the ratio would be higher by a factor of about 2×10^3.
The value of n is much higher than the lower bound 10^9/6, given as the ratio of the visible photon frequency of about 10^14 Hz and the cyclotron frequency f = 6×10^5 Hz of an electron in a magnetic field with the nominal value Bend = .2 Gauss of the endogenous magnetic field (a numerical cross-check follows this list). The discrepancy is six orders of magnitude. The desired value would correspond to magnetic field strengths of order Bend in the Bgal = 1 nT range, which corresponds to the order of magnitude of galactic magnetic fields.
The value of n would give, for Bend and an ion with 10 Hz cyclotron frequency (say an Fe++ ion), the energy of a visible photon. The condition heff = hgr predicts a value of n which is at least a factor mp/me ≈ 2^11 higher, and one must also then assume galactic magnetic field strength to obtain a sensible result.
2. The naive expectation was that Bend = .2×10^-4 Tesla should give an energy in the few-eV range. Something goes definitely wrong, since the magnetic fields in this value range should be in a key role. Either the hypothesis heff = hgr is wrong or the model is somehow wrong.
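As a numerical cross-check of the cyclotron figures quoted in the list above (a sketch using standard SI constants):

```python
import numpy as np

e = 1.602176634e-19     # C
m_e = 9.1093837015e-31  # kg
B_end = 0.2e-4          # T, nominal "endogenous" field

f_c = e * B_end / (2 * np.pi * m_e)
print(f"electron cyclotron frequency: {f_c:.2e} Hz")  # ~5.6e5 Hz
print(f"visible/cyclotron ratio: {1e14 / f_c:.2e}")   # ~1.8e8 ~ 10^9/6
```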
How to modify the hgr= heff hypothesis?
It seems that one should modify the hypothesis hgr= heff somehow.
1. A formal generalization of the form hgr = k×heff, k integer, could be imagined. It should guarantee that the cyclotron energies in Bend = .2 Gauss are in the bio-photon range. This would be satisfied for k ≈ Bend/Bgal ≈ 2×10^4: the Compton wavelength λgr would be a k-multiple of λeff. This kind of modification is of course completely ad hoc unless one is able to find some physical and mathematical justification for it.
2. Could one justify the replacement of the velocity v0 with a velocity which differs by a factor k from the rotation velocity of Earth? This would give v0/c ≈ 3×10^-2. It is however difficult to find a justification for why the rotation velocity around Earth would be so large.
3. Could 1/k characterize the dark matter portion of Earth? This would require Mdark,E/ME ≈ 5×10^-5 if one does not change the value of v0. One might justify this hypothesis by saying that it is indeed dark matter to which the gravitational flux tubes with large value of Planck constant connect biomatter.
The hypothesis that only a fraction of dark matter is involved in couplings by dark gravitons seems to be a rather feasible one. Is the modification consistent with the existing picture?
1. Can the model for the planetary system based on Bohr orbits tolerate this modification? This is the case only if the recent state of the planetary system reflects the past state, when most of the matter was dark. During the evolution of the Sun and planets the dark matter would have gradually transformed to ordinary matter. This picture is consistent with the proposal that dark magnetic flux tubes carry dark energy as magnetic energy and dark matter as large-heff phases. It also explains the (only) 10 percent accuracy of the predictions and the necessity to assume different v0 for inner and outer planets (vouter = vinner/5, but for Earth, having principal quantum number n=5, both identifications are possible).
2. The model explaining the apparent ability of superfluids to defy gravity leads to a Schrödinger equation in a gravitational field but with h replaced by hgr. The value of the height parameter z0 associated with the gravitational Schrödinger equation, telling the height above which the Schrödinger amplitude decays rapidly to zero, is given by
z0 = X/Y , X = [rS(E) RE^2]^(1/3) , Y = [4πβ0^2]^(1/3)
and is reduced by a factor k^(-1/3) ≈ .06 from the value 2.85×10^7 m, which is about the circumference of Earth, to about 17 km, which corresponds to the vertical size scale of the atmosphere, so that nothing catastrophic occurs. The corresponding time scale corresponds to a frequency of 170 Hz.
The two-fluid picture of superfluidity could correspond to the presence of ordinary and dark matter. All matter could be seen as multi-fluid in the sense that various particles, labelled by hgr proportional to the particle mass, would correspond to macroscopically quantum coherent superfluid phases.
3. The value of the gravitational Compton length in the case of Earth is scaled down by a factor 1/k ≈ 2×10^-4 to give Λ_gr ≈ 12.9 cm. This corresponds to the length scale of a brain hemisphere - an excellent candidate for a macroscopically quantum coherent system - so that TGD inspired biology seems to tolerate the reduction.
To summarize, the hypothesis h_gr = h_eff predicts a universal dark cyclotron photon spectrum in the bio-photon range only if the dark magnetic flux tubes couple biomatter to the dark part of Earth, which should carry a portion of order 2×10^-4 of the Earth's mass. This is a correction to the earlier picture, which however does not change the overall picture in any manner. The fact that one now has a precise quantitative estimate for the fraction of dark matter makes it easier to tolerate the embarrassment due to the earlier sloppy estimates.
For details and references see the new chapter Criticality and dark matter of "Hyper-finite factors and hierarchy of Planck constants" or the article Criticality and dark matter.
Multiverses and Blackberries
Martin Gardner
There be nothing so absurd but that some philosopher [or cosmologist? -M.G.] has said it.
The American philosopher Charles Sanders Peirce somewhere remarked that unfortunately universes are not as plentiful as blackberries. One of the most astonishing of recent trends in science is that many top physicists and cosmologists now defend the wild notion that not only are universes as common as blackberries, but even more common. Indeed, there may be an infinity of them!
It all began seriously with an approach to quantum mechanics (QM) called “The Many Worlds Interpretation” (MWI). In this view, widely defended by such eminent physicists as Murray Gell-Mann, Stephen Hawking, and Steven Weinberg, at every instant when a quantum measurement is made that has more than one possible outcome, the number specified by what is called the Schrödinger equation, the universe splits into two or more universes, each corresponding to a possible future. Everything that can happen at each juncture happens. Time is no longer linear. It is a rapidly branching tree. Obviously the number of separate universes increases at a prodigious rate.
If all these countless billions of parallel universes are taken as no more than abstract mathematical entities - worlds that could have formed but didn’t - then the only “real” world is the one we are in. In this interpretation of the MWI the theory becomes little more than a new and whimsical language for talking about QM. It has the same mathematical formalism, makes the same predictions. This is how Hawking and many others who favor the MWI interpret it. They prefer it because they believe it is a language that simplifies QM talk, and also sidesteps many of its paradoxes.
There is, however, a more bizarre way to interpret the MWI. Those holding what I call the realist view actually believe that the endlessly sprouting new universes are “out there,” in some sort of vast super-space-time, just as “real” as the universe we know! Of course at every instant when a split occurs, each of us becomes one or more close duplicates, each traveling a new universe. We have no awareness of this happening because the many universes are not causally connected. We simply travel along the endless branches of time’s monstrous tree in a series of universes, never aware that billions upon billions of our replicas are springing into existence somewhere out there. “When you come to a fork in the road,” Yogi Berra once said, “take it.”
It is true that the MWI, in this realist form, avoids some of the paradoxes of QM. The so-called “measurement problem,” for example, is no longer a problem because whenever a measurement occurs, there is no “collapse of the wave function” (or rotation of the state vector in a different terminology). All possible outcomes take place. Schrödinger’s notorious cat is never in a mixed state of alive and dead. It lives in one universe, dies in another. But what a fantastic price is paid for these seeming simplicities! It is hard to imagine a more radical violation of Occam’s razor, the law of parsimony which urges scientists to keep entities to a minimum.
The MWI was first put forth by Hugh Everett III in a Princeton doctoral thesis written for John Wheeler in 1956. It was soon taken up and elaborated by Bryce DeWitt. For several years John Wheeler defended his student’s theory, but finally decided it was “on the wrong track,” no more than a bizarre language for QM and one that carried “too much metaphysical baggage.” However, recent polls show that about half of all QM experts now favor the theory, though it is seldom clear whether they think the other worlds are physically real or just abstractions such as numbers and triangles. Apparently both Everett and DeWitt took the realist approach. Roger Penrose is among many famous physicists who find the MWI appalling. The late Irish physicist John S. Bell called the MWI “grotesque” and just plain “silly.” Most working physicists simply ignore the theory as nonsense.
In an article on “Quantum Mechanics and Reality” (in Physics Today, September 1970), DeWitt wrote with vast understatement about his first reaction to Everett’s thesis: “I still recall vividly the shock I experienced on first encountering the multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable, is not easy to reconcile with common sense. This is schizophrenia with a vengeance!”
In the MWI, most of its defenders agree, there is no room for free will. The multiverse, the universe of all universes, develops strictly along determinist lines, always obeying the deterministically evolving Schrödinger equation. Its solution is a monstrous wave function which never collapses unless it is observed and collapsed by an intelligence outside the multiverse, namely God.
In recent years David Deutsch, a quantum physicist at Oxford University, has become the top booster of the MWI in its realist form. He believes that quantum computers, using atoms or photons and operating in parallel with computers in nearby parallel worlds, can be trillions of times faster than today’s computers. He is convinced that many famous QM paradoxes, such as the double slit experiment and a similar one involving two half-silvered mirrors, are best explained by assuming an interaction with twin particles in a parallel world almost identical with our own. For example, in the double slit experiment, when both slits are open, our particle goes through one slit while its twin from the other world goes through the other slit to produce the interference pattern on the screen.
Deutsch calls our particle the “tangible” one, and the particle coming from the other world a “shadow” particle. Of course in the adjacent universe our particle is the shadow of their tangible particle. Because communication between universes is impossible, it is hard to imagine why a particle would bother to jump from one universe to another just to produce interference.
Deutsch believes that the results of calculating simultaneously in parallel worlds can somehow be brought back here to coalesce. Critics argue that QM paradoxes, as well as quantum computers, are just as easily explained by conventional theory or by such rivals as the pilot wave theory of David Bohm. In any case, Deutsch’s 1997 book The Fabric of Reality: The Science of Parallel Universes - and Its Implications is the most vigorous defense yet of a realistic MWI.
Deutsch is fully aware that the MWI forces him to accept the reality of endless copies of himself out there in the infinity of other worlds. “I may feel subjectively,” he writes (p. 53), “that I am distinguished among the copies as the ‘tangible’ one, because I can directly perceive myself and not the others, but I must come to terms with the fact that all the others feel the same about themselves. Many of those Davids are at this moment writing these very words. Some are putting it better. Others have gone for a cup of tea.” And he is puzzled by the fact that so few physicists are as enthralled as he about the MWI!
Theoretical and experimental work on quantum computers is now a complex, controversial, rapidly growing field with Deutsch as its pioneer and leading theoretician. You can keep up with this research by clicking on Oxford’s Centre for Quantum Computation’s Web site
The MWI should not be confused with a more recent concept of a multiverse proposed by Andrei Linde, a Russian physicist now at Stanford University, as well as by a few other cosmologists such as England’s Martin Rees. This multiverse is essentially a response to the anthropic argument that there must be a Creator because our universe has so many basic physical constants so finely tuned that, if any one deviated by a tiny fraction, stars and planets could not form - let alone life appear on a planet. The implication is that such fine tuning implies an intelligent tuner.
Linde’s multiverse goes like this. Every now and then, whatever that means, a quantum fluctuation precipitates a Big Bang. A universe with its own space-time springs into existence with randomly selected values for its constants. In most of these universes those values will not permit the formation of stars and life. They simply drift aimlessly down their rivers of time. However, in a very small set of universes the constants will be just right to allow creatures like you and me to evolve. We are here not because of any overhead intelligent planning but simply because we happen by chance to be one of the universes properly tuned to allow life to get started.
We come now to a third kind of multiverse, by far the wildest of the three. It has been set forth not by a scientist but by a peculiar philosopher, now at Princeton University, named David Lewis. In his best-known book, On the Plurality of Worlds (Oxford, 1986), and other writings, Lewis seriously maintains that every logically possible universe - that is, one with no logical contradictions such as square circles - is somewhere out there. The notion of logically possible worlds, by the way, goes back to Leibniz’s Theodicy. He speculated that God considered all logically possible worlds, then created the one He deemed best for His purposes.
Both the MWI and Lewis’s possible worlds allow time travel into the past. You need never encounter the paradox of killing yourself, yet you are still alive, because as soon as you enter your past the universe splits into a new one in which you and your duplicate coexist.
Most of Lewis’s worlds do not contain any replicas of you, but if they do they can be as weird as you please. You can’t, of course, simultaneously have five fingers on each hand and seven on each hand because that would be logically contradictory. But you could have a hundred fingers, and a dozen arms, or seven heads. Any world you can think of without contradiction is real. Can pigs fly? Certainly. There is nothing contradictory about pigs with wings. In an infinity of possible worlds there are lands of Oz, Greek gods on Mount Olympus, anything you can imagine. Every novel is a possible world. Somewhere millions of Ahabs are chasing whales. Somewhere millions of Huckleberry Finns are floating down rivers. Every kind of universe exists if it is logically consistent.
David Lewis’s mad multiverse was anticipated by hordes of science-fiction writers long before the MWI of QM came from Everett’s brain. More recent examples include Larry Niven’s 1969 story “All the Myriad Ways” and Frederik Pohl’s 1986 novel The Coming of the Quantum Cats. Jorge Luis Borges played with the theme in his story “The Garden of Forking Paths.” There is a quotation from this tale at the front of The Many Worlds Interpretation of Quantum Mechanics (1973), a standard reference by DeWitt and Neill Graham. For other examples of multiverses in science fiction and fantasy see the entry on “Parallel Worlds” in The Encyclopedia of Science Fiction (1995) by John Clute and Peter Nicholls.
Fredric Brown, in What Mad Universe (1950), described Lewis’s multiverse this way:
There are, then, an infinite number of coexistent universes.
“They include this one and the one you came from. They are equally real, and equally true. But do you conceive what an infinity of universes means, Keith Winton?”
“Well-yes and no.”
“It means that, out of infinity, all conceivable universes exist.
“There is, for instance, a universe in which this exact scene is being repeated except that you - or the equivalent of you - are wearing brown shoes instead of black ones.
“There are an infinite number of permutations of that variation, such as one in which you have a slight scratch on your left forefinger and one in which you have purple horns and-”
“But are they all me?”
Mekky said, “No, none of them is you - any more than the Keith Winton in this universe is you. I should not have used that pronoun. They are separate individual entities. As the Keith Winton here is; in this particular variation, there is a wide physical difference - no resemblance, in fact.”
Keith said thoughtfully, “If there are infinite universes, then all possible combinations must exist. Then, somewhere, everything must be true.”
“And there are an infinite number of universes, of course, in which we don’t exist at all - that is, no creatures similar to us exist at all. In which the human race doesn’t exist at all. There are an infinite number of universes, for instance, in which flowers are the predominant form of life - or in which no form of life has ever developed or will develop.
“And infinite universes in which the states of existence are such that we would have no words or thoughts to describe them or to imagine them.”
I have here looked at only the three most important versions of a multiverse. There are others, less well known, such as Penn State’s Lee Smolin’s universes which breed and evolve in a manner similar to Darwinian theory. For a good look at all the multiverses now being proposed, see British philosopher John Leslie’s excellent book Universes (1989).
I find it hard to believe that so many academics take Lewis’s possible worlds seriously. As poet Armand T. Ringer has put it in a clerihew:
David Lewis
Is a philosopher who is
Crazy enough to insist
That all logically possible worlds actually exist.
Alex Oliver, reviewing Lewis’s Papers in Metaphysics and Epistemology, in The London Times Literary Supplement (January 7, 2000), closes by calling Lewis “the leading metaphysician at the start of this century, head and beard above his contemporaries.”
The stark truth is that there is not the slightest shred of reliable evidence that there is any universe other than the one we are in. No multiverse theory has so far provided a prediction that can be tested. In my layman’s opinion they are all frivolous fantasies. As far as we can tell, universes are not as plentiful as even two blackberries. Surely the conjecture that there is just one universe and its Creator is infinitely simpler and easier to believe than that there are countless billions upon billions of worlds, constantly increasing in number and created by nobody. I can only marvel at the low state to which today’s philosophy of science has fallen.
SIAM News Blog
"Bike Tracks," Quasi-magnetic Forces, and the Schrödinger Equation
By Mark Levi
In an invited talk at the 2013 SIAM Conference on Applications of Dynamical Systems, Mark Levi described a recently discovered connection between two distinct objects—the stationary Schrödinger equation and “bicycle tracks.” A request from SIAM News for an article based on the talk elicited the following well-illustrated article.
The stationary Schrödinger equation

\[\ddot{x} + p(t)\,x = 0,\tag{1}\]

where \({p}\) is the given potential, arises in many branches of mathematics, physics, and engineering and has been studied for well over a century. Known also as Hill’s equation, it comes up in studies of the spectrum of the hydrogen atom, in celestial mechanics, particle accelerators, forced vibrations, wave propagation, and many other problems. Hill’s equation plays a central role in explaining the complete integrability of the Korteweg–de Vries equation: One of the most remarkable mathematical discoveries of the last century is that the eigenvalues of the Schrödinger operator remain fixed if the potential \({p}\) evolves according to the KdV equation.
Figure 1. Idealized bike.
The 1989 Nobel Prize in Physics was awarded to W. Paul for his invention of the Paul trap—an electromagnetic trap that suspends charged particles. The mathematical substance of Paul’s discovery amounts to an observation on Hill’s equation, as Paul explained in his Nobel lecture [10]. (As an alternative to Paul’s computational explanation, a geometrical explanation of the workings of the Paul trap can be found in [5].) The stability of Kapitsa’s famous inverted pendulum (demonstrated experimentally by Stephenson in 1908, about half a century before Kapitsa’s paper) is also explained by the properties of Hill’s equation. Incidentally, a topological explanation of this counterintuitive effect can be found in [6].
The long history of Hill’s equation is reflected in the rich classical literature of the 18th and 19th centuries on the eigenfunctions of special second-order equations (including polynomials of Lagrange, Laguerre, Chebyshev, and Airy’s function), as well as in more recent work on inverse scattering and on the geometry of “Arnold tongues” [1,2,9,11].
Introducing bike tracks, the second partner in the new relationship: Figure 1 shows an idealized bike—a segment \({RF}\) of constant length that can move in the plane as follows: The front \({F}\) is free to move along any path, while the velocity of the rear end \({R}\) is constrained to the direction \({RF}\)—that is, the rear wheel doesn’t sideslip. Figure 2 shows some examples; a minimal numerical version of this constraint is sketched after the figure.
Figure 2. Some paths of the rear wheel as the front wheel traces out the heavier path multiple times.
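As an aside, the no-sideslip constraint is easy to integrate numerically. The sketch below is my own minimal discretization (the front path, frame length, and step size are arbitrary illustrative choices, not taken from the article): writing the bike’s heading as \(\theta\), the constraint says \(\dot\theta\) equals the component of the front wheel’s velocity normal to the frame, divided by the frame length.

```python
import numpy as np

# Trace the rear-wheel track generated by a given front-wheel path F(t).
# No-sideslip constraint: d(theta)/dt = F'(t) . n / L, with heading theta,
# normal n = (-sin theta, cos theta), frame length L; rear wheel R = F - L e,
# where e = (cos theta, sin theta).

L = 1.0                                   # frame length (illustrative)
dt = 1e-3
t = np.arange(0.0, 4 * np.pi, dt)
F = np.column_stack([2 * np.cos(t), np.sin(2 * t)])   # some front path

theta = 0.0                               # initial heading
R = np.empty_like(F)
for i in range(len(t) - 1):
    R[i] = F[i] - L * np.array([np.cos(theta), np.sin(theta)])
    dF = F[i + 1] - F[i]
    # only the normal component of the front wheel's motion turns the bike
    theta += (-dF[0] * np.sin(theta) + dF[1] * np.cos(theta)) / L
R[-1] = F[-1] - L * np.array([np.cos(theta), np.sin(theta)])
# R now holds a rear track of the kind shown in Figure 2 (plot F and R to compare)
```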
The tire track problem does not have the pedigree of the Schrödinger equation, nor does it have the same rich history. Still, the problem has been studied since at least the 1870s (see [3] and references therein). It arises in differential geometry, and also as a model problem in engineering applications; a brief review and further discussion can be found in [4]. (Amusingly, the bicycle can be used to measure areas enclosed by planar curves: The actual device, the Prytz planimeter, is named after its inventor Holger Prytz, a 19th-century Danish cavalry officer.) A beautiful observation of R. Foote [3] gave rise to further developments, including a solution of Menzin’s conjecture from 1908 [8].
Figure 3. A quasi-magnetic force.
To describe the connection between the Schrödinger equation and the tire track problem, I specify a recipe that assigns, to every Schrödinger potential \(p(t)\), the path \((X(t), Y(t))\) of the front wheel in such a way that the Schrödinger equation and the “bicycle equation” are equivalent via an explicit transformation, as explained below.
To understand how the Schrödinger potential \(p\) generates the front wheel path, consider a particle of mass \(m = 1\) whose speed \(\upsilon\) is a prescribed function of time, and whose direction of motion is determined by the force
\[\begin{equation} F_N = a_N = \upsilon(2 - \upsilon)\end{equation}\tag{2}\]
acting in the direction normal to the velocity vector (see Figure 3). Unlike the true magnetic force, this magnetic-like force is not linear in \(\upsilon\). We can think of our particle as a rocket with the “thrust” force acting tangentially to the path, and with the normal “magnetic” force (2) determined by \(\upsilon\). Given \(\upsilon = \upsilon(t)\), the law (2) determines the path \((X(t), Y(t))\) of the particle completely, provided that we fix the initial point and the initial direction. As an example, if we hold \(\upsilon\) constant, we get uniform circular motion, except in the case of \(\upsilon = 0\) or \(2\), when the normal force (2) vanishes and uniform rectilinear motion results. Figure 4 shows trajectories for \(\upsilon=\alpha + \beta/(1+t^2)\) and \(\upsilon = \alpha + \beta\cos t\). Trajectories for \(\upsilon\) of the form \(\upsilon = \alpha + \beta\cos t + \gamma\cos 2t\) with various choices of \(\alpha\), \(\beta\), \(\gamma\) are shown in Figure 5.
Figure 4. Paths of a particle subject to the strange quasi-magnetic force for various choices of speed \(\upsilon = \upsilon(t)\). Each of these paths corresponds to a Schrödinger potential \(p = p(t) = 1 - \upsilon(t)\).
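To generate such paths numerically, write the velocity as \(\upsilon(t)(\cos\psi, \sin\psi)\); since the normal acceleration of a unit-mass particle is \(\upsilon\,\dot\psi\), the law (2) amounts to \(\dot\psi = 2 - \upsilon(t)\). The following sketch is my own discretization of this observation; the speed profile and initial data are arbitrary illustrative choices.

```python
import numpy as np

# Integrate the quasi-magnetic motion: speed v(t) is prescribed, and the
# direction psi of the velocity obeys d(psi)/dt = 2 - v, i.e. a_N = v(2 - v).

dt = 1e-3
t = np.arange(0.0, 30.0, dt)
alpha, beta = 1.2, 0.5
v = alpha + beta * np.cos(t)          # one of the speed profiles of Figure 4

psi, x, y = 0.0, 0.0, 0.0             # initial direction and point
path = np.empty((len(t), 2))
for i, vi in enumerate(v):
    path[i] = (x, y)
    x += vi * np.cos(psi) * dt
    y += vi * np.sin(psi) * dt
    psi += (2.0 - vi) * dt            # law (2): a_N = v(2 - v) = v * dpsi/dt
# 'path' traces a curve like those of Figure 4; a constant v gives a circle,
# and v = 0 or 2 gives the rectilinear degenerate cases.
```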
Given a Schrödinger potential \(p = p(t)\), we define \(\upsilon = \upsilon(t) = 1 - p\). With \(\upsilon\) thus prescribed, consider the motion \((X(t), Y(t))\) of the “magnetic” particle, as outlined in the preceding paragraph. If we now think of \((X(t), Y(t))\) as the path of the front wheel \(F\), as shown in Figure 1, we find that the angles \(\theta\) of the bike and the solutions \(x\) of the Schrödinger equation are related via the transformation \[\begin{equation} \theta = 2\arg(x+i{\dot{x}})+\varphi,\end{equation}\tag{3}\] where \(\varphi=t+{\int_0^t}p(s)\,ds\). With the transformation (3), we have converted one problem into another. More explicit details on the equivalence can be found in [7].
Figure 5. Schrödinger potentials \(p = a + b\cos t + c\cos 2t\) represented as equivalent bike paths for different choices of \(a\), \(b\), \(c\).
To summarize, each Schrödinger potential \(p\) can be represented as a front wheel path of an equivalent “bike” problem, as illustrated in Figures 4 and 5; in the former, the potential \(p = 1 − \upsilon\). In conclusion, the equivalence described raises the interesting prospect of translating known results obtained for one problem to better understand the other.
Acknowledgments: The author’s research was supported by NSF grant DMS–0605878.
[1] V.I. Arnold, Remarks on the perturbation theory for problems of Mathieu type, Russ. Math. Surveys, 38 (1983).
[2] H.W. Broer and C. Simo, Resonance tongues in Hill’s equations: A geometric approach, J. Differential Equations, 166:2 (2000), 290–327.
[3] R.L. Foote, Geometry of the Prytz planimeter, Rep. Math. Phys., 42:1–2 (1998), 249–271.
[4] R.L. Foote, M. Levi, and S. Tabachnikov, Tractrices, bicycle tire tracks, hatchet planimeters, and a 100-year-old conjecture, Amer. Math. Monthly, 120:3 (2013), 199–216.
[5] M. Levi, Geometry and physics of averaging with applications, Phys. D, 132 (1999), 150–164.
[6] M. Levi, Stability of the inverted pendulum—A topological explanation, SIAM Rev., 30 (1988), 639–644.
[7] M. Levi, Schrödinger’s equation and bike tracks—A connection, 2014;
[8] M. Levi and S. Tabachnikov, On bicycle tire tracks geometry, Menzin’s conjecture, and oscillation of unicycle tracks, Exp. Math., 18:2 (2009),173–186.
[9] D.M. Levy and J.B. Keller, Instability intervals of Hill’s equation, Comm. Pure Appl. Math., 16 (1963), 469–476.
[10] W. Paul, Electromagnetic traps for charged and neutral particles, Rev. Modern Phys., 62 (1990), 531–540.
[11] M.I. Weinstein and J.B. Keller, Asymptotic behavior of stability regions for Hill’s equation, SIAM J. Appl. Math., 47 (1987), 941–958.
Mark Levi is a professor of mathematics at Pennsylvania State University.
Saturday, July 28, 2007
I am guilty of frequently using physics speech in daily life, an annoying habit I also noticed among many of my colleagues [1]. You'll find me stating "My brain feels very Boltzmannian today", or "The customer density in this store is too high for my metastable mental balance". I have a friend who calls Chinese take out "the canonical choice" and another friend who, when asked whether he had made a decision, famously explained "I don't yet want my wave-function to collapse". My ex-boyfriend once called it "the physicist's Tourette-syndrome" [2].
One of my favourite physics-speech words is self-consistent. Self-consistency is tightly related to nothing. You know, that "nothing" that causes your wife to conclude her whole life is a disaster, we're all going to die in a nuclear accident, her glasses vanished (again!), and btw that's all your fault (obviously). But if you ask her what's the matter. Well, nothing.
"There's nothing I hate more than nothing
Nothing keeps me up at night
I toss and turn over nothing
Nothing could cause a great big fight
Hey -- what's the matter?
Don't tell me nothing."
~Edie Brickell, Nothing
1. Self-consistent
Science is our attempt to understand the world we live in. We observe and try to find reliable rules upon which to build our expectations. We search for explanations that are useful to make predictions, a framework to understand our environment and shape our future according to our needs. If our observations disagree with our rules, or observations seemingly disagree with each other (I swear I left my glasses in the kitchen), we are irritated and try to find a mistake. Something being in contradiction with itself [3] is what I mean by not self-consistent (What's the matter? - Nothing!).
On a mathematical basis this is very straightforward. E.g., if you assume my mood is given by a real-valued continuous function f on the compact interval [now, then] with f(now)f(then) smaller than 0, this isn't self-consistent with the expectation that f has no zero [4]. For more details on my mood, see the sidebar.
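(The theorem footnote [4] points to, stated as a formula:

\[ f \in C([\mathrm{now},\mathrm{then}]), \quad f(\mathrm{now})\,f(\mathrm{then}) < 0 \;\Longrightarrow\; \exists\, c \in (\mathrm{now},\mathrm{then}): \; f(c) = 0. \]

So a mood that changes sign must pass through zero somewhere in between.)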
Self-consistency is a very powerful concept in theoretical physics: if one talks about a probability, that probability had better not be larger than one. If one starts with the axioms of quantum mechanics, it's not self-consistent to talk about a particle's definite position and momentum. The speed of light being observer independent is not compatible with Galilean invariance and the standard addition law for velocities. Instead, self-consistency requires the addition law to be modified. This led Einstein to develop Special Relativity.
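To see how the modified addition law restores self-consistency, here is a two-line check (a trivial sketch in units with c = 1; the function name is of course my own):

```python
# Relativistic velocity addition keeps the speed of light invariant,
# which the Galilean law u + v cannot do.
c = 1.0

def add_velocities(u, v):
    # Einstein's addition law (units with c = 1)
    return (u + v) / (1 + u * v / c**2)

print(add_velocities(0.9, 0.9))   # 0.994..., still below c (Galilean: 1.8)
print(add_velocities(0.5, c))     # exactly c: observer-independent light speed
```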
A particularly nice example comes from multi-particle quantum mechanics, where an iterative approach can be used to find a 'self-consistent' solution for the electron distribution, e.g. in a crystal or for an atom with many electrons (see the self-consistent field method or Hartree-Fock method). A state of several charged particles will not be just a tensor product of the single-particle states, since the particles interact and influence each other. One starts with the tensor product as a 'guess' and applies the 'rules' of the theory. That is, by solving the Schrödinger equation with the mean-field potential which effectively describes the interaction, a new set of single-particle wave functions can be computed. This result will however in general not agree with the initial guess: it is not self-consistent. In this case, one repeats the procedure using the result as an improved guess. Given that the differential equations behave nicely, this iterative procedure leads one to a fixed point with the property that the initial distribution agrees with the resulting one: it is self-consistent.
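A toy version of this loop, just to make the fixed-point structure explicit - the matrix size, the 'mean-field' coupling, and the damping below are invented for illustration, not a real Hartree-Fock implementation:

```python
import numpy as np

# Iterate a mean-field map until the input density reproduces itself.

def mean_field_hamiltonian(density, h0, coupling=0.5):
    # one-particle Hamiltonian plus a potential built from the current density
    return h0 + coupling * np.diag(density)

def density_from(h, n_occ=2):
    # occupy the n_occ lowest eigenstates and sum |psi_i|^2
    _, vecs = np.linalg.eigh(h)
    return np.sum(np.abs(vecs[:, :n_occ])**2, axis=1)

rng = np.random.default_rng(0)
h0 = np.diag(np.arange(6.0)) + 0.1 * rng.standard_normal((6, 6))
h0 = 0.5 * (h0 + h0.T)                    # symmetrize

density = np.full(6, 2 / 6)               # initial guess
for step in range(200):
    new_density = density_from(mean_field_hamiltonian(density, h0))
    if np.max(np.abs(new_density - density)) < 1e-10:
        print(f"self-consistent after {step} iterations")
        break
    density = 0.5 * (density + new_density)   # damped update for stability
```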
A similar requirement holds for quantum corrections. A theory that is subject to quantum corrections but whose initial formulation does not take into account the existence of such extra terms is strictly speaking not self-consistent (see also the interesting discussion to our recent post on Phenomenological Quantum Gravity).
There are some subtleties one needs to consider, most importantly that our knowledge is limited in various regards. Self-consistency might only hold under certain assumptions or in certain limiting regimes, like small velocities (relative to the speed of light), large distances (relative to the Planck length) or at energies below a certain threshold. Likewise, not being self-consistent might be the result of having applied a theory outside these limits (typically, using an expansion outside a radius of convergence). In some cases (gravitational backreaction), violations of self-consistency can be negligible.
However, one might argue that if it is possible at all to arrive at such a disagreement then at least one of the assumptions was unnecessary to begin with, and could have been replaced by requiring self-consistency. Unfortunately, this is often more easily said than done -- physics is not mathematics. We rarely start with writing down a set of axioms which one could check for self-consistency. Instead, in many cases one starts with little more than a patchwork of hints, and an idea how to connect them. Self-consistency in this case is somewhat more subtle to check. My friends and I often kill each other's ideas by working out nonsensical consequences. Here, at least as important as self-consistency is that a theory in physics also has to be consistent with observation.
2. Consistent with Observation
The classical Maxwell-Lorentz theory is self-consistent. However, it is in disagreement with the stability of the atom. According to the classical theory, an electron circling around the nucleus should radiate off energy. The solution to this problem was the development of quantum mechanics. The inconsistency in this case was one with observation. Without quantizing the orbits of the electron, atoms would not be stable, and we would not exist.
This requirement is specific to sciences that describe the real world out there. Such a theory can be 'wrong' (not consistent with observation) even though it is mathematically sound. Sometimes however, these two issues get confused. E.g. in a recent Discover issue, Seth Lloyd wrote:
"The vast majority of scientific ideas are (a) wrong and (b) useless. The briefest acquaintance with the real world shows that there are some forms of knowledge that will never be made scientific [...] I would bet that 99.8 percent of ideas put forth by scientists are wrong and will never be included in the body of scientific fact. Over the years, I have refereed many papers claiming to invalidate the laws of quantum mechanics. I’ve even written one or two of them myself. All of these papers are wrong. That is actually how it should be: What makes scientific ideas scientific is not that they are right but that they are capable of being proved wrong."
~Seth Lloyd, You know too much
The current issue now had a letter in reply to this article:
"I was taken aback by Seth Lloyd's assertion that "99.8 percent of ideas put forth by scientists are [probably] wrong" and even more so by his statement that "of the 0.2 percent of ideas that turn out to be correct ... [t]he great majority of them are relatively useless." His thesis omits a basic trait of what we call science -- that it is a continuous fabric, weaving all provable knowledge together [...] we do science for a science sake, because a fundamental principle of science is that we never know when a discovery will be useful"
~Eric Fisher, Springfield, IL.
Well, the majority of my scientific ideas are definitely (a) wrong and (b) useless, but these usually don't end up in a peer review process. However, the reply letter apparently referred to the word 'correct' as 'provable knowledge', and to science as the 'weave' of all that knowledge. It might indeed be that the mathematical framework of a theory that is not consistent with observation turns out to be useful later but that doesn't change the fact that this idea is 'wrong' in the meaning that it does not describe nature. Peer review today seems to be mostly concerned with checking self-consistency, whereas being non-consistent with observation is ironically increasingly tolerated as a 'known problem'. Like, the CC being 120 orders of magnitude too large is a known problem. Oohm, actually the result is just infinity. But, hey, you've turned your integration contour the wrong way, the result is not infinity, but infinity + 2 Pi.
The requirement of consistency with observation was for me the main reason to choose theoretical physics over maths. The world of mathematics, so I found, is too large for me and I got lost in following runaway thoughts, or generalizing concepts just because it was possible. It is the connection to the real world, provided by our observations, that can guide physicists through these possibilities and lead the way. (And, speaking of observations and getting lost, I'd really like to know where my glasses are.)
3. Self-contained
Unlike maths, theoretical physics aims to describe the real world out there. This advantageous guiding principle can also be a weakness when it comes to the quantities we deal with. Mathematics deals with well-defined quantities whose properties are examined. In physics one wants to describe nature, and the exact definitions of the quantities are in many cases subject of discussion as well. Consider how our understanding of space and time has changed over the last centuries!
In physics it has often happened that concepts of a theory's constituents only developed with the theory itself (e.g. the notion of a tensor or the Fock-space). As such it happens in physics that one can deal with quantities even though the framework does not itself define them. One might say in such a case the theory is incomplete, or not self-contained.
Due to this complication, I've known more than one mathematician who frowned upon approaches in theoretical physics as too vague, whereas physicists often find mathematical rigour too constraining, and instead prefer to rely on their intuition. Joe Polchinski expressed this as follows:
"[A] chain of reasoning is only as strong as its weakest step. Rigor generally makes the strongest steps stronger still - to prove something it is necessary to understand the physics very well first - and so it is often not the critical point where the most effort should be applied. [A]nother problem with rigor [is]: it is hard to get it right. If one makes one error the whole thing breaks, whereas a good physical argument is more robust."
~Joe Polchinski, Guest Post at CV
When it comes to formulating an idea, physicists often set different priorities than mathematicians. In some cases it might just not be necessary to define a quantity because one can sit down and measure it (e.g. the PDFs). Or, one can just leave a question open (will be studied in a forthcoming publication) and get a useful theory nevertheless. All of our present theories leave questions open. Despite this being possible, it is unsatisfactory, and the attempt to make a theory self-contained has lead to many insights throughout the history of science.
Newton's dynamics deals with forces, yet there is nothing in this framework that explains the origin of a force. It contains masses, yet does not explain the origin of masses. Maxwell's theory provides an origin of a force (electromagnetic). It has a source term (J), yet it does not explain the dynamics of the source term. This system has to be closed, e.g. with minimal coupling to another field whose dynamics is known. The classical Maxwell-Lorentz theory does this; it is self-contained and self-consistent. However, as mentioned above, this theory is not consistent with observation. Today we know the sources for the electromagnetic field are fermions; they obey the Dirac equation and Fermi statistics. However, if you look at an atom close enough you'll notice that quantum electrodynamics alone also isn't able to describe it satisfactorily...
Besides the existence of space and time per se, the number of space-time dimensions is one of these open questions that I find very interesting. It has most often been an additional assumption. An exception is string theory, where self-consistency requires space-time to have a certain number of dimensions. However, if it also contains an explanation for why we observe only three of them, nobody has yet found it. So again, we are left with open questions.
4. Simple and Natural [5]
The last guiding principle that I want to mention is simplicity, or the question whether one can reduce a messy system of axioms and principles to something more simple. Is there a way to derive the parameters of the standard model from a single unified approach? Is there a way to derive the axioms of quantization? Is there a way to derive that our spacetime has dimension three, or Lorentzian signature?
In my opinion, simplicity is often overrated compared to the first three points I listed. We tend to perceive simplicity as elegance or beauty, concepts we strive to achieve, but these guidelines can turn out to be false friends. If you can find your glasses, look around and you'll notice that the world has many facets that are neither elegant nor simple (like my husband impatiently waiting for me to finish). Even if you'd expect the underlying laws of nature to be simple, you'd still have to make the case that a certain observable reflects the elementary theory rather than being a potentially very involved consequence of a complex dynamical system, or an emergent feature. A typical example is the average distances of the planets from the sun, a Sacred Mystery of the Cosmos that today nobody would try to derive from a theory of first principles (restrictions apply).
Also, we tend to find things simpler the more familiar we are with them, up to the level of completely forgetting about them (did you say something?). E.g. we are so used to starting with a Lagrangian that we tend to forget that its usefulness rests on the validity of the action principle. It is also quite interesting to note that researchers who are familiar with a field often find it 'simple' and 'natural'... I therefore support Tommaso's suggestions to renormalize simplicity to the generalized grandmother.
In this regard I also want to highlight the argument that one can allegedly derive all the parameters in the standard model 'simply' from today's existence of intelligent life. Notwithstanding the additional complication of 'intelligent', could somebody please simply explain 'existence' and 'life'?
Much like classical electrodynamics, Einstein's field equations too have a source term whose dynamics one needs to know. The system can be closed with an equation of state for each component. This theory is self-consistent [6], and it is consistent with all available observations. It reaches its limits if one asks for the microscopic description of the constituents. The transition from the macro- to the microscopic regime can be made for the sources of the gravitational field, but not also for the coupled gravitational field (oh, and then there's the CC, but this is a known problem).
Two theories that yield the same predictions for all observables I'd call equivalent (if you don't like that, accept it as my definition of equivalence.) But our observations are limited, and unlike the case of classical electrodynamics not being consistent with the stability of the atom, there is presently no observational evidence in disagreement with classical gravity.
For me this then raises the question:
Is there more than one theory that is self-consistent, self-contained and consistent with all present observations?
In a recent comment, Moshe remarked: "To paraphrase Ted Jacobson, you don't quantize the metric for the same reason you don't go about quantizing ocean waves." That sounds certainly reasonable, but if I look at water close enough I will find the spectral lines of the hydrogen atom and evidence for its constituents. And their quantization. To me, this just doesn't satisfactorily answer the question of what the microscopic structure of the 'medium', here space-time, is.
And what have we learned from all this...?
Let me go back to the start: If you ask a question and the answer is 'Nothing', you most likely asked the wrong question, or misunderstood the answer.
Ah... Stefan found my glasses (don't ask).
See also: Self-Consistency at The Reference Frame
[1] This habit is especially dominant -- and not entirely voluntary -- among non-native English speakers, whose vocabulary is naturally most developed in the job-related area.
[2] Unintentional cursing and uttering of obscenities, called coprolalia, is actually only a specific feature of the Tourette syndrome.
[3] However, some years ago I was taught the word 'self-consistency' in psychology has a different meaning, it refers to a person accumulating knowledge from his/her own behaviour. A person whose thoughts and actions are in agreement and not in contradiction is called 'clear'. (At least in German. I couldn't find any reference to this online, and I'm not a psychologist, so better don't trust me on that.).
[4] See: Bolzano's theorem.
[5] "Woman on Window", by F.L. Campello.
For more, see here.
[6] Note that this theory is self-consistent at arbitrary scales as long as you don't ask for the microscopic origin of the sources.
The most spherical object ever made... used for the gyroscopes in NASA's Gravity Probe B. Launched in April 2004, Gravity Probe B tests two effects predicted by Einstein's theory: the geodetic effect and the frame-dragging (see here for a brief intro).
In order for Gravity Probe B to measure these tiny effects, it must use a gyroscope that is nearly perfect—one that will not wobble or drift more than 10^-12 degrees per hour while it is spinning.
"A nearly-perfect gyroscope must be nearly perfect in two ways: sphericity and homogeneity. Every point on its surface must be exactly the same distance from the center (a perfect sphere), and its structure must be identical from one side to the other [...]
After years of research and development, Gravity Probe B produced just such a gyroscope. It is a 1.5-inch sphere of fused quartz, polished and “lapped” to within a few atomic layers of perfect sphericity. A scan of its surface shows that only .01 microns separate the highest point from the lowest point. Transform the gyroscope into the size of the Earth and its highest mountains and deepest ocean trenches would be a mere eight feet from sea level!"
Thursday, July 26, 2007
FIAS, the Frankfurt Institute for Advanced Studies
This week, I was again at the new campus of my old university. The science departments of the Johann Wolfgang Goethe University are all moving out of downtown Frankfurt into the fields of Niederursel, where new buildings keep springing up at an extraordinary rate. One of these new buildings is especially eye-catching with its bright-red finish.
This is the new building of FIAS, the Frankfurt Institute for Advanced Studies, and it's interesting not only because of its colour - it's one of the first public research institutes in Germany financed to a large extent by the money of private sponsors.
Universities in Germany have traditionally been financed by public money of the state and federal governments, and they usually don't have large funds of their own. Frankfurt University is a bit special in this respect, since it was founded in 1914 by wealthy Frankfurt citizens. While today it is a publicly funded university, as is common in Germany, there is a strong tradition of private sponsoring of research and higher education.
So, a few years ago, theoretical physicist Walter Greiner and neuroscientist Wolf Singer started using their connections to raise private funds to establish a new kind of institute, which was supposed to be legally independent, but closely connected to the university and its science departments. It should bring together theorists from such diverse areas as biology, chemistry, neuroscience, physics, and computer science in order to address problems all revolving around a common theme: The study of structure formation and self-organization in complex systems.
This was the beginning of FIAS.
Today, there are more than 50 scientists, guests and students working together on cooperative phenomena on length scales ranging from quarks in colour superconductivity and heavy ion collisions over atoms in atomic clusters and macromolecules to cells in the immune system and the brain. Details and more links can be found on the pages of the FIAS scientists.
The training of graduate students is organized in a Graduate School. Last summer, I was involved in the compilation of a brochure presenting the FIAS, and I was fascinated by the really inspiring atmosphere among the students, who come from all over the world and from very diverse scientific backgrounds, but were always involved in interesting discussions.
In September, the FIAS is supposed to move into the new, red building, which was built for the institute by a private sponsor, the Giersch Foundation. There, FIAS scientist will have a place to work and think - it will be interesting to follow the outcome of this kind of "experiment".
Tuesday, July 24, 2007
Don't fart
Okay, it's unlikely you visit this blog to hear my opinion about farting, but I just read this article in New Scientist
How the obesity epidemic is aggravating global warming
(Issue June 30th - July6th, p. 21)
which is the most ridiculous fart line-up of weak links designed to support a specific opinion that I've come across lately. The argumentation of the author, Ian Roberts (a professor of public health in London), is roughly: if you're fat you are wasting energy. Either by storing fat such that it can't even be used as bio fuel, or by moving it around with the help of gasoline-powered transportation devices.
To begin with, despite what the title says, the author does not actually talk about global warming, but about wasting energy. The connection between both is just assumed in the first sentence with 'we know humans are causing [global warming]', and not even once addressed after this. On the other hand, also the connection between wasting energy and obesity is constructed to make the point that you should lose weight to save the earth:
"[...] it is becoming clear that obese people are having a direct impact on the climate. This is happening through their lifestyles and the amount and type of food they eat, and the worse the obesity epidemic gets the greater its impact on global warming."
Well, if one wants to criticize a lifestyle, then one should criticise a lifestyle, but not add several associative leaps after that. Let us start with asking what exactly is a 'waste' of energy? Using energy for purposes that do not necessarily improve our well-being could generally be considered a waste. That goes for breaking a cellphone (consider all the energy needed to produce it), browsing the web the whole day (your home wireless doesn't run on vacuum energy) as well as for unnecessary consumption of food for whose production energy was needed.
However, whether that food is actually eaten or thrown away is completely irrelevant in this context. Also, on an equal footing one can argue that the mere presence of diet products damages the climate: it takes energy to produce and transport them, but the energy gain after consumption is lowered. Is there any reason to waste energy on producing diet coke when one can as well drink water? And while we're at it, is there any reason to go jogging every morning - isn't that just a waste of energy? Come to think about it, civilization itself seems to be a waste of energy.
The article goes on arguing
"[...] his greater bulk and higher metabolic rate will cause him to feel the heat more in the globally warmed summers, and he will be the first to turn on the energy intensive air conditioning."
If one argues that overweight people turn on the AC more often because they sweat more easily, one might want to take into account that underweight (or generally sickly) people tend to turn on the heating more often. People who suffer from back pain, arthritis and shortness of breath might use their car more often (as the article states), but these ailments are not necessarily caused by obesity. The only thing one can state is that being healthy and well adapted to the part of the world you live in minimizes the additional energy needed to survive and feel comfortable (how 'needed' relates to 'actually used' is a completely different question).
I am definitely in favor of more sidewalks, of increased awareness of health risks caused by obesity, and I totally agree that we should save energy. But I would appreciate a scientific discussion of these issues, and not a mixed-up mesh of several issues all drowned in political correctness.
In a similar spirit I read last week several articles claiming "Meat is murder on the environment" or likewise, a 'conclusion' based on a paper "Evaluating environmental impacts of the Japanese beef cow–calf system by the life cycle assessment method" (published in Animal Science Journal 78 (4), 424–432)
Being a vegetarian myself, I could give you a good number of reasons to drop the meat, but nothing you wouldn't find online in some thousand other places, so let me just focus on the issue at hand. If you want to save energy with the food you buy and eat, the most important factor to consider is origin and transportation.
• Your apple from New Zealand, labeled 'bio' or not, doesn't tunnel to you. In fact, since vegetables and fruits, unlike beef, consist mostly of water, the amount of gasoline needed per energy content (joule) of transported food is higher for greens. So, preferably buy stuff that was not transported all around the globe whenever you can.
• If you buy products from countries where slash and burn is still practiced, you're damaging the environment more than if you support your local farmer - even if he's somewhat more expensive than Safeway.
• And, needless to say, don't buy stuff you don't need. Each time you have to throw something away, you are throwing away all the energy that was necessary to produce it. That doesn't only go for food, but for everything else including wrappings.
I want to add that, much like cows, humans too release methane through flatulence, which is said to contribute to global warming. So maybe we should consider a national anti-fart campaign? Regarding the vegetarian factor, also please note that "The cellulose in vegetables cannot be digested, therefore vegetarians produce more gas than people with a mixed diet." [source]
The bottom line of this writing is: don't construct or publish ridiculous cross-relations that are scientifically doubtful just for a catchy headline.
See also: Global Warming
Monday, July 23, 2007
This and That
• I am very proud to report that I eventually managed to install a recent-comments-box in the sidebar!! Thanks go via several detours back to Clifford.
• Flip has an excellent post on The Braneworld and the Hierarchy in the Randall Sundrum (I) model
• Hey America, Germany is catching up.
• Idea of the day: I suggest that journals which reject more than 70% of submitted manuscripts should offer a consolation gift. What I have in mind is a shirt saying "My manuscript went to PRD and all I got was this lousy T-shirt".
• Ever felt like your brain is too small? Think twice (if you have capacity left): Man with tiny brain shocks doctors
• Coincidentally, I came across the German version of Lee Smolin's book Warum gibt es die Welt? (Life of the Cosmos), which I found somewhat disturbing (I mean, even more than the English version). Among other things (that concern Japanese surfer) I learned that New York is the largest city on the planet (such the re-translation). Apologies to the translator*, but should you consider buying that book, I strongly recommend the English version (to read the original sentence go to amazon, and search inside for "irrelevant content" - amazingly the result is only one hit).
• Quotation of the day:
Ralph W. Emerson, in Society and Solitude [Vol 7], Chapter VII: Works and Days
* It turned out my husband knows him personally. It's a small world...
Sunday, July 22, 2007
GZK cutoff confirmed
In an earlier post, Bee explained the physics behind the GZK (Greisen, Zatsepin and Kuzmin) cutoff: protons traveling through outer space will - when their energy crosses a certain threshold - no longer experience the universe as transparent. If their energy is high enough, the protons can scatter off the omnipresent photons of the Cosmic Microwave Background and create pions. As a result, their mean free path drops considerably and only very few of them are expected to reach Earth. This threshold for photopion production by ultra-high-energy protons is known as the GZK cutoff; a back-of-the-envelope estimate of its value is sketched below.
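The threshold follows from two-body relativistic kinematics: at threshold the invariant mass equals m_p + m_π, which for a head-on collision gives E_p ≈ m_π(m_p + m_π/2)/(2E_γ). A quick check (the CMB photon energy used here is a typical value, not an average over the Planck spectrum):

```python
# Head-on threshold for photopion production p + gamma -> p + pi.
m_p   = 938.272e6     # proton rest energy, eV
m_pi  = 139.570e6     # charged pion rest energy, eV
E_cmb = 6.3e-4        # typical CMB photon energy, eV

E_th = m_pi * (m_p + m_pi / 2) / (2 * E_cmb)
print(f"GZK threshold ~ {E_th:.1e} eV")   # ~1e20 eV
```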
The presence of this cutoff had been observed by the HiRes cosmic ray array (Observation of the GZK Cutoff by the HiRes Experiment, arXiv:astro-ph/0703099), but had been disputed by the results from the Japanese detector AGASA (Akeno Giant Air Shower Array) which caused excitement when it failed to see the cut-off in data obtained up to 2004. A third experiment, the Pierre Auger Observatory on the plains of the Pampa Amarilla in western Argentina, which started taking data last year, now settled the question:
“If the AGASA had been correct, then we should have seen 30 events [at or above 10^20 eV], and we see two,” says Alan Watson, a physicist from the University of Leeds, U.K., and spokesperson for the Auger collaboration [source]. According to Watson, the data also suggest that these highest-energy rays comprise protons and heavier nuclei, the latter of which don't feel the GZK drag.
The results were announced at the 30th International Cosmic Ray Conference in Merida, Yucatan, Mexico, and got a brief mention in Nature. The Nature article also points out that there is the prospect of identifying the regions of the sources of the highest-energy particles, but these data are preliminary. "Unless I talk in my sleep, even my wife doesn't know what these regions are", as Watson was quoted in Nature.
And of course, now that there is new data, somebody is around to claim one needs an even larger experiment to understand it: "Now we understand that above the GZK cutoff there are ten times less cosmic rays than we thought 10 years ago, so we may need a detector ten times as big as Auger," says Masahiro Teshima of the Max Planck Institute for Physics in Munich, Germany, who worked on AGASA and is working on the Telescope Array [source].
The recent paper by the Pierre Auger collaboration with more details was on the arxiv last week:
The UHECR spectrum measured at the Pierre Auger Observatory and its astrophysical implications
T.Yamamoto, for the Pierre Auger Collaboration, arXiv:0707.2638
Abstract: The Southern part of the Pierre Auger Observatory is nearing completion, and has been in stable operation since January 2004 while it has grown in size. The large sample of data collected so far has led to a significant improvement in the measurement of the energy spectrum of UHE cosmic rays over that previously reported by the Pierre Auger Observatory, both in statistics and in systematic uncertainties. We summarize two measurements of the energy spectrum, one based on the high-statistics surface detector data, and the other on the hybrid data, where the precision of the fluorescence measurements is enhanced by additional information from the surface array. The complementarity of the two approaches is emphasized and results are compared. Possible astrophysical implications of our measurements, and in particular the presence of spectral features, are discussed.
The upper end of the cosmic ray energy spectrum as measured by the Pierre Auger Observatory: The black dots represent data points, the blue and red curves are expectations derived from different models for the composition and energy distribution of the cosmic ray particles, all based on well-established physics including the GZK cutoff mechanism. Two events cannot be understood as stemming from protons, but may well be explained by heavier nuclei. (Figure from T. Yamamoto, The UHECR spectrum measured at the Pierre Auger Observatory and its astrophysical implications, ICRC'07; Credits: Auger Collaboration, technical information)
More plots and data can be found on the websites of the Pierre Auger Observatory.
Saturday, July 21, 2007
The LHC at Nature Insight
With less than a year to go before the start of the Large Hadron Collider at CERN, there has been a lot of media coverage about this huge collider lately - see e.g. at NYT, The New Yorker, and of course Bee's post The World's Largest Microscope.
Much more in-depth information on the physics, the history, and the engineering aspects of the LHC can be found in this week's Nature Insight: The Large Hadron Collider. Unfortunately, a subscription is required for the full content, but two interesting articles are freely available:
How the LHC came to be, by former CERN Director-General Chris Llewellyn Smith, on the political and organisational struggles involved in building such an international, multi-billion euro machine, and Beyond the standard model with the LHC, by CERN theorist John Ellis (the guy with the penguins - see page 5), on the different options on possible new physics that might be discovered at the LHC.
Have a nice weekend!
Wednesday, July 18, 2007
Phenomenological Quantum Gravity
[This is the promised brief write-up of my talk at the Loops '07 in Morelia, slides can be found here, some more info about the conference here and here.
When I submitted the title for this talk, I actually expected a reply saying "Look. This is THE international conference on Quantum Gravity. We already have ten people speaking about phenomenology - could you be a bit more precise here?". But instead, I found myself joking that I am the phenomenology of the conference. Therefore, I added a somewhat extended motivation to my talk which I found blog-suitable, so here it is.]
The standard model (SM) of particle physics [1] is an extremely precise theory and has demonstrated its predictive power over the last decades. But it has also left us with several unsolved problems: questions that cannot be answered - that cannot even be addressed - within the SM. There are the mysterious whys: why three families, three generations, three interactions, three spatial dimensions? Why these interactions, why these masses, and these couplings? There are the cosmological puzzles, there is dark matter and dark energy. And then there is the holy grail of quantum gravity (see also: my top ten unsolved physics problems).
There are two ways to attack these problems. One is the top-down approach: starting with a promising fundamental theory, one tries to reach common ground and to connect to the standard model in a reductionist spirit. The difficulty with this approach is that not only does one need that 'promising candidate for the fundamental theory', but most often one also has to come up with a whole new mathematical framework to deal with it. Most of the talks at the conference [2] were top-down approaches. The other way is to start from what we know and extend the SM in a constructivist approach. Examples for that might be to take the SM Lagrangian and just add all kinds of higher order operators, thereby potentially giving up symmetries we know and like. The difficulty with this approach is to figure out what to do with all these potential extensions, and how to extract sensible knowledge about the fundamental theory from them.
I like it simple. Indeed, the most difficult thing about my work is how to pronounce 'phenomenology' (and I've practiced several years to manage that). So I picture myself somewhere in the middle. People have called that 'effective models' or 'test theories'. Others have called it 'cute' or 'nonsense'. I like to call it 'top-down inspired bottom-up approaches'. That is to say, I take some specific features that promising candidates for fundamental theories have, add them to the standard model, and examine the phenomenology. Typical examples are e.g. just asking what the presence of extra dimensions leads to. Or the presence of a minimal length. Or a preferred reference frame. You might also examine what consequences it would have if the holographic principle or entropy bounds were to hold. Or whether stochastic fluctuations of the background geometry would have observable consequences.
These approaches do not claim to be a fundamental theory of their own. Instead, they are simplified scenarios, suitable to examine certain features as to whether their realization would be compatible with reality. These models have their limitations, they are only approximations to a full theory. But to me, in a certain sense physics is the art of approximation. It is the art of figuring out what can be neglected, it is the art of building models, and the art of simplification.
"Science may be described as the art of systematic over-simplification."
~Karl Popper
One can imagine more beyond the standard model than just QG! So, if we are talking about phenomenology of quantum gravity we'll have to ask what we actually mean by that. To me, quantum gravity is the question how we can reconcile the apparent disagreements between classical General Relativity (GR) and QFT. And I say 'apparent' because nature knows how quantum objects fall, so there has to be a solution to that problem [3]. To be honest though, we don't even know that gravity is quantized at all.
I carefully state we don't 'know' because we have no observational evidence whatsoever that gravity is quantized. (The fact that we don't understand how a quantized field can be coupled to an unquantized gravitational field doesn't mean it's impossible.) Indeed, one can be sceptical about whether it's observable at all. This is reflected very aptly in the below quotation from Freeman Dyson, which I think is deliberately provocative and basically says my whole field of work doesn't exist:
"According to my hypothesis, the gravitational field described by Einstein's theory of general relativity is a purely classical field without any quantum behavior [...] If this hypothesis is true, we have two separate worlds, the classical world of gravitation and the quantum world of atoms, described by separate theories. The two theories are mathematically different and cannot be applied simultaneously. But no inconsistency can arise from using both theories, because any differences between their predictions are physically undetectable."
~Freeman Dyson [Source]
Well. Needless to say, I do think there is phenomenology of QG that is in principle observable, even though we might not yet be able to observe it. And I do think that observing it will lead us a way to QG.
However, there are various scenarios that could be realized at Planckian energies. Gravity could be quantized within one or the other approach. Also, higher order terms in classical gravity could become important. Or, there could be semi-classical effects coming into the game. Now one tries to take some insights from these approaches, leading to the above mentioned phenomenological models. Already here one most often has a redundancy. That is, various scenarios can lead to the same effect. E.g. modified dispersion relations, or the Planck scale being a fundamental limit to our resolution are effects that show up in more than one approach. In addition, there's a second step in which these models are then used to make predictions. Again, various models, even though different, could yield the same predictions. That's what I like to call the 'inverse problem': how can we learn something about the underlying theory of quantum gravity from potential signatures?
In the figure below I stress 'new and old' phenomenology because a sensible model shouldn't only be useful to make new predictions, it should also reproduce all that stuff we know and like. I have a really hard time taking seriously a model that doesn't reproduce the standard model and GR in suitable limits.
Now here are some approaches in this category of 'top-down inspired bottom-up approaches' that I find very interesting (for some literature, see e.g. this list):
(And possibly we can maybe soon add macroscopic non-locality to that list, an interesting scenario that Fotini, Lee and Chanda are presently looking into.)
However, whenever one works within such a model one has to be aware of its limitations. E.g. the models with large extra dimensions are in my opinion a case in which what could sensibly be done has been done. And now we'll have to turn on the LHC and see. After the original ideas had been outlined, many people began to build more and more specific models with a lot of extra features. It's not that I don't find that interesting, but it's somewhat beside the point. To me it's like building a house and worrying about the color of the curtains before the first brick has been laid.
Now, all of the approaches I've mentioned above are attempts to get definitive signatures of QG, but so far none of these predictions on its own would be really conclusive. Take e.g. a possible modification of the GZK cutoff - it could have been 'new' physics, though not clear which, or maybe just some not-yet-understood 'old' physics, like the showers not being created by protons from outside our galaxy, as generally assumed?
So, my suggestion to make progress in this regard is to construct models that are suitable to investigate observables in various different areas. In such a way, we could be able to combine predictions and make them more conclusive. Think about the situation with GR at the beginning of the last century: it predicted a perihelion precession of Mercury, but there were other explanations, like an additional planet, a quadrupole moment of the sun, or maybe a modification of Newtonian gravity. It took another observable - in this case light deflection by the sun - that was predicted within the same framework, to confirm that GR was the correct description of nature [4]. And please note, a factor 2 mattered here [5].
I personally am very optimistic about the future progress in quantum gravity - and that not only because it's hard to beat Dyson's pessimism. I think it doesn't matter where we start from, be it a top-down approach, a bottom-up approach, or somewhere in the middle. I also think it doesn't matter which direction each of us starts into. The history of science tells us that there often are various different ways to arrive at the same conclusion. A particularly nice example is how Schrödinger's wave formulation and Heisenberg's matrix approach eventually turned out to be part of the same theory.
As long as we listen to what our theories tell us, take into account what nature has to say, and are willing to redirect our research accordingly - and as long as we don't get lost in distractions along the way - then I think we have good chances to find a way to quantum gravity. And this finally solves the mystery of the quotation on the last slide of my talk:
'The problem is all inside your head' she said to me
The answer is easy if you take it logically
I’d like to help you in your struggle to be free
There must be fifty ways to [quantum gravity]
[1] In my notation the SM includes General Relativity.
[2] The exception being the very recommendable talk on Effective Quantum Gravity by John F. Donoghue.
[3] Though 3 years living in the US have taught me there's actually no such thing as a 'problem' - it's called a challenge. One just has to like them, eh?
[4] Admittedly, what the measurement actually said was not as straightforward as one would have wished. I leave it to my husband to elaborate on this interesting part of the history of science.
[5] The resulting deviation can be reproduced in the Newtonian approach up to a factor 1/2.
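To spell out the factor of 2 with standard textbook numbers (my addition, not from the talk): for light grazing the solar limb with impact parameter b, GR predicts the deflection angle

$$ \delta\varphi_{\mathrm{GR}} = \frac{4GM_\odot}{c^2 b} \approx 1.75'', $$

whereas the naive Newtonian calculation of a light corpuscle falling in the sun's field gives exactly half of that, $2GM_\odot/(c^2 b) \approx 0.87''$ - the difference the 1919 eclipse measurements had to resolve.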
Tuesday, July 17, 2007
AvH's 10 point plan
The Alexander von Humboldt Foundation is the master of science networking among the German non-profit foundations. If you've managed to get one of their scholarships you become part of their brotherhood for a lifetime, including a membership card - Unfortunately I don't know about the secret handshake, since I've never even applied. The largest drawback of their scholarships is that one can only apply to a host who is also a member (Humboldtianer!), which was the reason for me to choose the German Academic Exchange Service (DAAD) instead.
However, I've just found that AvH came up with a ten-point plan of recommendations "for making Germany more attractive for international cutting-edge researchers". Their suggestions make a lot of sense to me and I find the press release worth mentioning. Even though some of it (2./7.) addresses specifically German problems, the points 9. and 10. especially apply to many other countries as well, as does 4., and 3. is generally a good idea (one that I too have mentioned repeatedly, and in my opinion an issue that will become more important the more complex and global the scientific community becomes). Let us hope that all these pretty words will have concrete consequences in the not too far future.
For the full text, see here. In brief the points are:
1. More jobs for scientists and scholars
On average, German professors supervise 63 students. This is more than twice as many as the average at top-rank international universities.
2. Academic careers need planning certainty: establishing tenure track as an option for junior researchers
German universities must take measures to plan the career stage between a doctorate and a secure professorship and make it compatible internationally. On the pattern of the Anglo-Saxon tenure track, clear, qualifying steps should be defined at which decisions are made about remaining at an institution.
3. Career support as an advisory and supervisory task of academic managers
Senior academics as well as university and/or institute directors must play an active role in human resources development for their junior researchers. Young scientists and scholars need careers advice.
4. Promoting early independence by taking risks in financing research
By international comparison, young academics in Germany have less scope for decision-making and action. Funding programmes for early, independent research must be strengthened. Especially for researchers at an early stage in their careers, procedures should be profiled for research work involving an unknown risk factor.
5. Making recruitment and appointments more professional
Appointment procedures must have an open outcome and be transparent. To this end, commissions charged with appointments must include external or independent expert reviewers. Good academics should be appointed quickly. Internationally respected universities can no longer afford to take years over appointments, particularly as universities and research establishments now actively have to recruit junior researchers internationally to a much greater extent than they did in the past.
6. Dissolve staff appointment schemes and adapt management structures
Rigid staff appointment schemes must make way for flexible appointment options, or be dissolved. Independent junior research group leaders must be put on a par with junior professors within the universities and in collaborations between universities and non-university research establishments.
7. Creating special regulations for collective wage agreements in the academic sector
According to many of those involved, the new wage agreement for the public service sector is not commensurate with appropriate remuneration for academic and non-academic staff at non-university and university research establishments. By comparison with other pay-scales, it is not competitive, either nationally or internationally, it restricts mobility, and its rigid conditions do not take account of the special features of academic life.
8. Internationally competitive remuneration
It must be ensured that cutting-edge researchers can be offered internationally competitive remuneration. The framework for allocating remuneration to professors currently valid at universities leaves too little scope for this.
9. Internationalising social security benefits
Internationally mobile researchers often have to accept major disadvantages or financial losses with regard to pension rights.
10. Increasing transparency and creating an attractive working environment
• Academic employers in Germany must be put in a position to offer organisational and financial support for removal and relocation which is already the norm in other countries, especially when top-rank academic personnel are appointed.
• Child-care facilities for internationally mobile researchers at universities and non-university research establishments must be expanded quickly and extensively. International appointments in Germany still often fail because there is a lack of child-care facilities.
• Careers advice and support for (marital) partners seeking employment, as well as so-called dual career advice or support for academic couples, are required to attract internationally mobile researchers. Examples from abroad indicate that this does not necessarily mean concrete job offers (which are often difficult to find); rather, intelligent counselling can satisfy many people's needs.
Related: See also The LHC Theory Initiative, The Terrascale Alliance, Temporary Display, and Temporary Display - Contd.
Zeitgeist
... is not only a German word that I've never heard a German actually use [1], but also the title of the new Smashing Pumpkins album. By coincidence, I was wearing my ancient ZERO shirt last week, so I felt like it was my duty to pick up the CD.
It is an interesting album, but overall very disappointing. To begin with, I never liked Billy Corgan's voice, but if there's no way around it, it definitely goes better with melancholy and infinite sadness than with revolution. I mean, come on, he's composing a song in 2006 titled United States with lyrics saying "fight! I wanna fight! I wanna fight! revolution tonight!" and manages to sing it such that it could as well have been about, say, compactification on Calabi-Yau manifolds [2].
There are more politically flavored tracks on the album: For God and Country ("it's too late for some, it's too late for everyone") and Doomsday Clock ("it takes an unknown truth to get out, I'm guessing I'm born free, silly me"), but the only thing worth mentioning about them is the fact that there presently is a market for this. That tells a lot more about the 'Zeitgeist' than the music itself [3].
Most of the tracks on the CD sound extremely similar, drowned in an ever present electric guitar soup and exchangeable melodies. Billy Corgan is at his best with the slower and more thoughtful titles like e.g. Neverlost ("If you think just right, if you'll love you'll find, certain truths left behind").
Favourite tracks from previous albums: Disarm, To Sheila, Bullet with Butterfly Wings, 1979
[1] My husband proudly reports he can testify to at least one incident in which one of his uncles, a Prof. of theology and philosophy, successfully used the word.
[2] That's why I call it a science blog.
[3] And while I am at it: the German 'ei' is pronounced like the English 'I' (or the beginning of the word 'aisle') in both places (whereas the German 'i' is pronounced like the English 'ee'). The German 'Z' is pronounced close to 'ts'. That is, with 'Tsaitgaist' you'll make yourself understood better than with 'seetgeest'.
Monday, July 16, 2007
What's new?
Nothing. Well, almost nothing.
• I dyed my hair. The color is called 'ginger'. I'd have called it pumpkin. It actually looks like foul apricots. Exchange of the day so far: 'What happened to your hair?' - 'It's an allergic reaction.' - 'To what?' - 'Stupid questions.' (As one can easily deduce, my conversation partner in this case obviously was not Canadian.)
• Though the plan was this year it would not be necessary to pack my household into boxes and drag them around, I will actually be moving twice before the end of the year. Don't ask. At least I am staying in town.
• My last plant, which suffered significantly during my previous trip, has surprisingly recovered (well, at least half of it), and is so not looking forward to my upcoming trip. This is to warn you that I'll be flying to Europe on Thursday, and be off and away for a while.
• I've found six degrees of freedom.
• I just saw this paper on the arxiv:
Search for Future Influence from L.H.C
By Holger B. Nielsen, Masao Ninomiya
Abstract: We propose an experiment which consists of pulling a card and use it to decide restrictions on the running of L.H.C. at CERN, such as luminosity, beam energy, or total shut down. The purpose of such an experiment is to look for influence from the future, backward causation. Since L.H.C. shall produce particles of a mathematically new type of fundamental scalars, i.e. the Higgs particles, there is potentially a chance to find hitherto unseen effects such as influence going from future to past, which we suggest in the present paper.
which features the idea that the nature of the Higgs field is such that it attempts to avoid its own production: "When the Higgs particle shall be produced, we shall retest if there could be influence from the future so that, for instance, the potential production of a large number of Higgs particles in a certain time development would cause a pre-arrangement so that the large number of Higgs productions, should be avoided."
Therefore - if this hypothesis is true - the LHC is likely to suffer an accident and has to be shut down. The argument is supported by the cancellation of the Superconducting Supercollider: "Thus it is really not unrealistic that precisely at the first a large number of Higgs production also our model-expectations that is influence from the future would show up. Very interestingly in this connection is that the S.S.C. in Texas accidentally would have been the first machine to produce Higgs on a large scale. However it were actually stopped after a quarter of the tunnel were built, almost a remarkable piece of bad luck."
The authors therefore propose to give backwards causation an economically less damaging possibility to avoid Higgs production, by means of a card game that settles runs for the LHC and permits the possibility of shutting down completely in a quiet and undisastrous way.
One should take this very seriously: "It must be warned that if our model were true and no such game about restricting strongly L.H.C. were played [...] then a “normal” (seemingly accidental) closure should occur. This could be potentially more damaging than just the loss of L.H.C. itself. Therefore not performing [...] our card game proposal could - if our model were correct - cause considerable danger."
I find this interesting as it gives a completely new spin to postdiction. See, we now can have a theory that disables its own observability by backward causation. So, one can actually post-dict something before it has happened, and then go back into the future. Makes me wonder though why the universe hasn't disabled itself even before nucleosynthesis. Maybe God doesn't play dice with the universe, but instead plays card games?
• Have a good start into the week!
Saturday, July 14, 2007
First Light for the Gran Telescopio Canarias
Last night, the Gran Telescopio Canarias (GTC) at the Observatorio del Roque de los Muchachos of the European Northern Observatory (ENO) in La Palma, Canary Islands, Spain, saw its "First Light". The first star observed was Tycho 1205081, close to Polaris - a bit more photogenic is this shot of the pair of interacting galaxies UGC 10923 with extended star formation regions, taken with an exposure time of 50 seconds:
Interacting galaxies UGC 10923 seen with the eyes of the World's largest telescope (Credits: Gran Telescopio Canarias, Instituto de Astrofisica de Canarias)
The primary mirror of the new telescope is made up of 36 separate, hexagonal segments, fabricated at the Glaswerke Schott in Mainz, just around the corner from Frankfurt. Taken together, the segments have a light-collecting surface of 75.7 m², which corresponds to a circular mirror with a diameter of 10.4 metres. At this size, it is currently the largest telescope for optical and near-infrared light!
The Gran Telescopio Canarias in La Palma, Canary Isles, in September 2006 (Credits: GTC project webcam)
This was in the news these days (see e.g. Le Monde), but the European Northern Observatory somehow has managed to issue a press release only in Spanish, so I am a bit at a loss to find more details. Actually, the report in the FAZ is very good, and recalls the developments that led to the construction of these huge telescopes:
I remember from the popular astronomy book I read as a kid that at that time the 5-metre mirror of the Mount Palomar telescope was thought to be the endpoint of the growth of telescope mirror size: larger solid mirrors are too heavy and deform when the telescope is moved, and moreover, the image gets blurred anyway by the distortions caused to the light as it passes through the atmosphere. As a case in point, a 6-metre telescope in the Soviet Union was mentioned, which produced pictures of lower quality than expected for its size. I was quite disappointed when I read that.
Fortunately, both obstacles could be overcome with new technologies first realised in the 1990s: Active Optics, which means that the mirror is always kept in perfect shape by an array of motors and can therefore be lightweight, and large, and Adaptive Optics, which manages to compensate for the fluctuations of the density of air and allows for a seeing nearly as good as in space.
Among the big optical telescopes using these techniques - the Keck, Subaru and Gemini-North telescopes in Hawaii, the four mirrors of the Very Large Telescope and the Gemini-South telescope in Chile, the Large Binocular Telescope in Arizona, the Hobby-Eberly Telescope in Texas, and the Southern African Large Telescope in the South African Karoo - the Gran Telescopio Canarias is currently the largest one.
The good news is that all these telescopes will continue to take great shots of the Universe for the professionals and for armchair astronomers like me, even once the Hubble Space Telescope has stopped working.
Potentially Insane
If you have a look at the sidebar, you'll see that even the internet is presently bored! Here is what PI residents do when they go bonkers.
PI stands for... Probably Improbable, Politically Incorrect, Potentially Insane, Preon Infected, Problems Included, Proudly Ignorant, Promising Insults, Positively Irrational, Presently Insignificant, Philosophical Illusions, Physics Inside
Contributed submissions:
Promoting Ideas, Prain Included, Pump It, Plotting Infinity, Position Independent, Pissing Ion, Perfectly Intolerant, Protecting Insanity, Post Inflation, Plutonium Injection, Pain Intensifier, Premature Interruption, Positive Impact, Private Intrusion
And here is what Wikipedia had to add, see PI (disambiguation):
Primitive Instinct (sometimes), Public Intoxication (definitely), People's Initiative (more than useful), Principal Investigator (haven't seen one), Primary Immunodeficiency (not yet), Predictive Index (none), Provider Independent (that's what I dream of), Pass Interference (my job), Programmed Instruction (absent)
My apologies to the whole public outreach department. I expect a sentence of 4 months snow.
See also: 3.141592653589793238462...
Thursday, July 12, 2007
I once read a science fiction story about the not-too-far future. Our planet's flora became fed up with mankind, and decided to strike back. It began with plumbing problems - trees' roots destroying pipes - and went on to grass breaking through the pavement and ivy growing over houses. I have to think about this each time I see a tree causing cracks in a walkway, or grass growing in every possible and impossible place.
Tuesday, July 10, 2007
Shrinking Earth
No, this is not about a resuscitation of old ideas about the history of planet Earth, but these days I could learn that the Earth Is Smaller Than Assumed, according to geodesists from the University of Bonn who have discovered that the blue planet is really smaller than originally thought. Well - not really, I would say: these guys are talking about 5 millimetres, or 0.2 inches.
Anyway, this accurate result is really impressive! It results from the combined analysis of radio signals from distant quasars, observed by a worldwide net of more than 70 radio telescopes. Characteristic features in the radio signals from quasars are received at slightly different times at different places on Earth, and the combination of these measurements using the technique of Very Long Baseline Interferometry allows a very precise determination of the relative distances of the radio telescopes: these relative distances can be deduced to within 2 millimetres over 1000 km, or 2 parts per billion (ppb). From the network of radio telescopes distributed all around the globe, it is possible to calculate the dimension of the Earth very precisely. This analysis, accomplished with improved precision over previous similar work by the Bonn geodesists, yields a diameter of the Earth 5 millimetres smaller than supposed so far. According to a report in the New Scientist about this result, the total diameter of the Earth at the equator is around 12,756.274 kilometres (7,926.3812 miles).
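As a quick check of those numbers (my own arithmetic, not from the press release):

$$ \frac{2\,\mathrm{mm}}{1000\,\mathrm{km}} = \frac{2\times 10^{-3}\,\mathrm{m}}{10^{6}\,\mathrm{m}} = 2\times 10^{-9} = 2\,\mathrm{ppb}, $$

and a 5 millimetre correction on a diameter of about 12,756 km amounts to roughly 0.4 ppb, so the claimed correction is indeed within reach of the quoted precision.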
Axel Nothnagel of the University of Bonn, who heads the team that provided new and more accurate data about the diameter of the Earth. (Credits: University of Bonn Press Release, July 5, 2007, Frank Luerweg)
A propos shrinking Earth: the Earth shrank by a huge step, in a metaphorical way, 45 years ago today, as I heard this morning on the radio: on July 10, 1962, TELSTAR was launched from Cape Canaveral, the first communications satellite to allow live TV broadcasts between Europe and North America, bridging at the speed of light a distance that is steadily growing by 18 millimetres per year...
The TELSTAR communications satellite, launched 45 years ago today (Source: Wikipedia on Telstar)
PS: The paper by the Axel Nothnagel team is: The contribution of Very Long Baseline Interferometry to ITRF2005, by Markus Vennebusch, Sarah Böckmann and Axel Nothnagel, Journal of Geodesy 81 (2007) 553-564, DOI: 10.1007/s00190-006-0117-x. If someone can tell me where I can find the 5 millimetres in that paper, I would be very grateful ;-)
Today on the Arxiv
Today I came across this very entertaining paper
Hollywood Blockbusters: Unlimited Fun but Limited Science Literacy
By C.J. Efthimiou, R.A. Llewellyn
Abstract: In this article, we examine specific scenes from popular action and sci-fi movies and show how they blatantly break the laws of physics, all in the name of entertainment, but coincidentally contributing to science illiteracy.
I didn't even know there was an arxiv category for Physics and Society. The authors conclude with
"Hollywood is reinforcing (or even creating) incorrect scientific attitudes that can have negative results for the society. This is a good reason to recommend that all citizens be taught critical thinking and be required to develop basic science and quantitative literacy."
It's hard to disagree with that recommendation, even without reading the paper. Though I have to say, if somebody has the 'scientific attitude' that he might survive a jump from the 15th floor, I guess natural selection will take care of that. For most cases I think we've all been taught from earliest childhood on not to mix up fiction with reality... That is, except for those of us who end up in theoretical physics, involuntarily or on purpose bending and breaking the laws of nature on our notebooks.
Update: See also The Physics of Nonphysical Systems.
Monday, July 09, 2007
Monday Links
In case you're just sitting at breakfast looking for a good read:
Sunday, July 08, 2007
The LHC Theory Initiative
Want proof that the grass is always greener on the other side? I just read this article
Refilling the Physicist Pool
about the LHC theory initiative:
"We are behind the Europeans, and we believe very strongly that we shouldn't just leave this work to the Europeans," Baur said in a UB statement. [...]
Funding in the US for particle physics as a whole and theoretical particle physics in particular has declined significantly over the past 15 years, Baur said. In addition, physics departments in US universities tend to hire faculty members who develop innovative ideas, whereas in Europe, the physics culture puts equal emphasis on novel research and solid calculations that help advance the field as a whole. But with the Large Hadron Collider -- the world's largest particle accelerator -- coming online in the next year or sooner, Baur said, the US cannot afford to fall behind."
It's interesting that in the US ideas are 'innovative' whereas in Europe they are 'novel' (especially since both refer to a field that is several decades old, and hasn't seen very much novelty lately). Admittedly, I find the perspective of a 'physics culture' that produces 'solid' next-to-next-to-next-to-next-to leading order calculations somewhat depressing.
For the German counterpart, see also the Terrascale Alliance.
Saturday, July 07, 2007
I spent half of the day trying to sort through all that stuff which has accumulated on my desk while I was away. My efforts were impressively unsuccessful. The only thing that came out of this was the poem below. I think I'll go for a walk, buy a lighter, and then give it a second try.
Cardboard boxes, paper piles,
Unread books, and many files,
Coffee cups and empty cans,
Post-its, trash and broken pens.
Unpaid bills, forgotten friends,
Pieces, broken in my hands,
Wedding photos in between
Notebooks and a magazine.
Plastic plants, a moving box,
And a pair of unmatched socks,
Unfinished, and missing pieces,
Leave me wondering where peace is.
[For more, check my website]
... I actually think I have a lighter... if only I could find it... what a mess!
Friday, July 06, 2007
It's all about sex...
... yes, we already knew that. Men are intelligent to impress women, and women are intelligent to find the best men. That's why you're sitting at your desk, chewing on a pen, trying to quantize gravity.
Here's what Psychology Today tells us (Source: Ten Politically Incorrect Truths About Human Nature, by Alan S. Miller and Satoshi Kanazawa):
Well, and once you've destroyed a civilization and sufficiently impressed every woman that was 'fit' enough to survive, keep in mind that by your human nature you are actually polygamous, because it's an evolutionary advantage:
And I'm sure 6 feet 4 also comes in handy for changing light-bulbs. On the other hand, there are certain natural selection mechanisms in societies which tolerate polygamy. As you'll also learn from the above article, suicide terrorists are predominantly Muslim because a) polygamy increases competition among men and b) because they are promised 72 virgins in heaven. (If only things were that simple. I still think airline passengers should stroke pigs before boarding, definitely preferable to throwing away my Coke each time I go through security.)
Also, sorry to report, but having children is, statistically speaking, a bad idea for men when it comes to the peak of the crime-and-creativity curve:
"These calculations have been performed by natural and sexual selection, so to speak, which then equips male brains with a psychological mechanism to incline them to be increasingly competitive immediately after puberty and make them less competitive right after the birth of their first child. Men simply do not feel like acting violently, stealing, or conducting additional scientific experiments, or they just want to settle down after the birth of their child but they do not know exactly why."
I especially like the part with 'they don't know why'. And finally, a Harvard professor solved the puzzle of why men prefer D-cups:
Well, I think there's truth in it, as my age seems to be incredibly hard to judge. Relatedly, you'll be interested to hear that a recent study shows Women Don't Talk More Than Guys:
"The researchers placed microphones on 396 college students for periods ranging from two to 10 days, sampled their conversations and calculated how many words they used in the course of a day. The score: Women, 16,215. Men, 15,669.The difference: 546 words: "Not statistically significant," say the researchers."
Have a nice weekend. Have fun. Reproduce. Go, discover a new country or write a sonnet.
Thursday, July 05, 2007
The Planck Scale
The Planck scales - a length and a mass* - indicate the limits at which we expect quantum gravitational effects to become important.
Gravity coupled to matter requires a coupling constant G that has units of length over mass. One finds the Planck scale if one lets quantum mechanics come into the game. For this, let us consider a quantum particle of a (so far unknown) mass mp with a Compton wavelength lp, the relation between the two being given by the Planck constant:
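In formulas, this is just the usual Compton wavelength:

$$ l_p = \frac{\hbar}{m_p c}. $$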
This is the quantum input. Now consider that particle to be as localized as is possible taking into account its quantum properties. That is, the mass mp is localized within a space-time region with extensions given by the particle's own Compton wavelength. The higher the mass of that particle, the smaller the wavelength. However, we know that General Relativity says that if we push a fixed amount of mass together into a smaller and smaller region, it will eventually form a black hole. More generally, one can ask when the perturbation of the metric that this particle causes will be of order one:
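The weak-field metric perturbation caused by a mass m at distance r is of order Gm/(c²r); evaluating it at the particle's own Compton wavelength, the order-one condition reads

$$ \frac{G m_p}{c^2\, l_p} \sim 1. $$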
which then can be solved for the mass, and subsequently for the length scale we were looking for. If one puts in some numbers, one finds:
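$$ m_p = \sqrt{\frac{\hbar c}{G}} \approx 2\times 10^{-8}\,\mathrm{kg} \approx 1.2\times 10^{19}\,\mathrm{GeV}, \qquad l_p = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\,\mathrm{m}. $$

That is, the Planck mass corresponds to about 22 micrograms, or some 10¹⁹ proton masses.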
These Planck scales thus indicate the limit at which the quantum properties of our particle will cause a non-negligible perturbation of the space-time metric, and we really have to worry about how to reconcile the classical with the quantum regime. Compared to energies that can be reached at colliders (the LHC will have a center of mass energy of the order of 10 TeV), the Planck mass is huge. This reflects the fact that the gravitational force between elementary particles is very weak compared to the other forces that we know, and this is what makes it so hard to experimentally observe quantum gravitational effects.
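For the numbers, here is a minimal Python sketch (my own check with rounded CODATA constants, not part of the original post):

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2

m_p = math.sqrt(hbar * c / G)      # Planck mass in kg
l_p = math.sqrt(hbar * G / c**3)   # Planck length in m

# Express the Planck mass as an energy in GeV (E = m c^2, 1 GeV = 1.602...e-10 J)
m_p_GeV = m_p * c**2 / 1.602176634e-10

print(f"Planck mass:   {m_p:.3e} kg  ~  {m_p_GeV:.3e} GeV")
print(f"Planck length: {l_p:.3e} m")

This prints roughly 2.176e-08 kg ~ 1.221e+19 GeV and 1.616e-35 m, about fifteen orders of magnitude above the ~10 TeV reach of the LHC.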
Max Planck introduced these quantities in 1899; the paper (it's in German) is available online
(Credits to Stefan for finding it). You'll find the natural mass scales introduced on page 479ff. He didn't call them 'Planck' scales then, and it is also interesting why he found them useful to introduce, namely because even aliens would use them:
"It is interesting to note that with the help of the [above constants] it is possible to introduce units [...] which [...] remain meaningful for all times and also for extraterrestrial and non-human cultures, and therefore can be understood as 'natural units'."
Coincidentally, yesterday I saw a paper on the arxiv
What is Special About the Planck Mass?
By C. Sivaram
Abstract: Planck introduced his famous units of mass, length and time a hundred years ago. The many interesting facets of the Planck mass and length are explored. The Planck mass ubiquitously occurs in astrophysics, cosmology, quantum gravity, string theory, etc. Current aspects of its implications for unification of fundamental interactions, energy dependence of coupling constants, dark energy, etc. are discussed.
which gives a nice introduction into the appearances of various mass scales in physics, with some historical notes.
* With the speed of light set equal to 1, in which case a length is the same as a time. If you find that confusing, just define a Planck time by dividing the length by the speed of light.
Wave Function Functionals
We extend our prior work on the construction of variational wave functions ψ that are functionals of functions χ: ψ = ψ[χ], rather than simply being functions. In this manner, the space of variations is expanded over those of traditional variational wave functions. In this article we perform the constrained search over the functions χ chosen such that the functional ψ[χ] satisfies simultaneously the constraints of normalization and the exact expectation value of an arbitrary single- or two-particle Hermitian operator, while also leading to a rigorous upper bound to the energy. As such the wave function functional is accurate not only in the region of space in which the principal contributions to the energy arise but also in the other region of the space represented by the Hermitian operator. To demonstrate the efficacy of these ideas, we apply such a constrained search to the ground state of the negative ion of atomic hydrogen H⁻, the helium atom He, and its positive ions Li⁺ and Be²⁺. The operators W whose expectations are obtained exactly are the sums of the single-particle operators W = ∑_i r_i^n, n = -2, -1, 1, 2, W = ∑_i δ(r_i), and W = -(1/2) ∑_i ∇_i², and the two-particle operators W = ∑ u^n, n = -2, -1, 1, 2, where u = |r_i - r_j|. Comparisons with the method of Lagrangian multipliers and with other constructions of wave-function functionals are made. Finally, we present further insights into the construction of wave-function functionals by studying a previously proposed construction of functionals ψ[χ] that lead to the exact expectation of arbitrary Hermitian operators. We discover that, analogous to the solutions of the Schrödinger equation, there exist ψ[χ] that are unphysical in that they lead to singular values for the expectations. We also explain the origin of the singularity.
Originally published:
Pan, Xiao-Yin, Marlina Slamet, and Viraht Sahni. "Wave Function Functionals." Physical Review A 81.4 (2010). DOI: 10.1103/PhysRevA.81.042524